Test Report: KVM_Linux_crio 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Failed tests (33/310)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 75.21
34 TestAddons/parallel/Ingress 153.39
36 TestAddons/parallel/MetricsServer 312.09
163 TestMultiControlPlane/serial/StopSecondaryNode 141.85
165 TestMultiControlPlane/serial/RestartSecondaryNode 58.38
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 366.72
170 TestMultiControlPlane/serial/StopCluster 141.87
171 TestMultiControlPlane/serial/RestartCluster 657.58
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.62
173 TestMultiControlPlane/serial/AddSecondaryNode 125.5
229 TestMultiNode/serial/RestartKeepsNodes 329.5
231 TestMultiNode/serial/StopMultiNode 141.29
238 TestPreload 220.15
246 TestKubernetesUpgrade 435.6
278 TestPause/serial/SecondStartNoReconfiguration 107.47
317 TestStartStop/group/old-k8s-version/serial/FirstStart 272.21
337 TestStartStop/group/embed-certs/serial/Stop 139.06
340 TestStartStop/group/no-preload/serial/Stop 138.97
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
344 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
345 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 79.25
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
354 TestStartStop/group/old-k8s-version/serial/SecondStart 759.23
355 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.24
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 545.66
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 545.8
358 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.51
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 425.22
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 426.98
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 397.91
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 147.44
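To iterate on one of these failures outside CI, the usual approach is to re-run just that test through Go's -run filter against minikube's integration suite. The invocation below is a sketch, not the CI command: the --minikube-start-args value mirrors this job's kvm2/crio configuration but should be checked against test/integration before use.

    go test -v -timeout 90m -run "TestAddons/parallel/Registry" \
      ./test/integration --minikube-start-args="--driver=kvm2 --container-runtime=crio"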
TestAddons/parallel/Registry (75.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 5.380775ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.032765332s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003796641s
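Both registry pods report healthy at this point. If that needs to be checked by hand, the same label selectors the test polls can be queried directly; the context, namespace, and labels below are taken from the lines above, and -o wide only adds node/IP columns:

    kubectl --context addons-979357 -n kube-system get pods -l actual-registry=true -o wide
    kubectl --context addons-979357 -n kube-system get pods -l registry-proxy=true -o wide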
addons_test.go:338: (dbg) Run:  kubectl --context addons-979357 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-979357 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-979357 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.089312105s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-979357 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 ip
2024/09/13 18:33:21 [DEBUG] GET http://192.168.39.34:5000
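The in-cluster wget timed out, after which the test fell back to a direct GET against the VM IP on port 5000 (the DEBUG line above). Reproducing both paths by hand, as sketched below, helps localize whether the registry itself or in-cluster DNS/service routing is at fault. The pod name registry-dns-check is arbitrary, and the /v2/ path assumes the addon serves the standard Registry v2 API:

    # host side: hit the registry on the VM IP, as the test's fallback does
    curl -sSI "http://$(out/minikube-linux-amd64 -p addons-979357 ip):5000/v2/"
    # in-cluster: repeat the service-DNS probe the test ran, plus an explicit lookup
    kubectl --context addons-979357 run --rm registry-dns-check --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- sh -c \
      "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"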
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable registry --alsologtostderr -v=1
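Before the harness's generic post-mortem below, a few targeted queries usually narrow a failure like this faster than the full minikube log dump; these are a suggested follow-up using the labels from this test, not part of the recorded run:

    kubectl --context addons-979357 -n kube-system describe pods -l actual-registry=true
    kubectl --context addons-979357 -n kube-system describe pods -l registry-proxy=true
    kubectl --context addons-979357 -n kube-system get events --sort-by=.lastTimestamp | tail -n 25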
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-979357 -n addons-979357
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 logs -n 25: (1.402081465s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-220014                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-220014                                                                     | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | -p download-only-283125                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-220014                                                                     | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | binary-mirror-840809                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46177                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-840809                                                                     | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-979357 --wait=true                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-979357 ssh cat                                                                       | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | /opt/local-path-provisioner/pvc-2e98d28b-4232-4373-82bf-032b9972820e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-979357 ip                                                                            | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:21:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:21:44.933336   11846 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:21:44.933589   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933598   11846 out.go:358] Setting ErrFile to fd 2...
	I0913 18:21:44.933603   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933811   11846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:21:44.934483   11846 out.go:352] Setting JSON to false
	I0913 18:21:44.935314   11846 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":248,"bootTime":1726251457,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:21:44.935405   11846 start.go:139] virtualization: kvm guest
	I0913 18:21:44.937733   11846 out.go:177] * [addons-979357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:21:44.939244   11846 notify.go:220] Checking for updates...
	I0913 18:21:44.939253   11846 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:21:44.940802   11846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:21:44.942374   11846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:21:44.943849   11846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:44.945315   11846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:21:44.946781   11846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:21:44.948355   11846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:21:44.980298   11846 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:21:44.981482   11846 start.go:297] selected driver: kvm2
	I0913 18:21:44.981496   11846 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:21:44.981507   11846 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:21:44.982221   11846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.982292   11846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:21:44.996730   11846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:21:44.996769   11846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:21:44.997020   11846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:21:44.997050   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:21:44.997088   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:21:44.997097   11846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:21:44.997143   11846 start.go:340] cluster config:
	{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:44.997247   11846 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.998916   11846 out.go:177] * Starting "addons-979357" primary control-plane node in "addons-979357" cluster
	I0913 18:21:45.000116   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:21:45.000156   11846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:21:45.000181   11846 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:45.000289   11846 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:21:45.000299   11846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:21:45.000586   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:21:45.000604   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json: {Name:mk395248c1d6a5d1f66c229ec194a50ba2a56d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:45.000738   11846 start.go:360] acquireMachinesLock for addons-979357: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:21:45.000781   11846 start.go:364] duration metric: took 30.582µs to acquireMachinesLock for "addons-979357"
	I0913 18:21:45.000797   11846 start.go:93] Provisioning new machine with config: &{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:21:45.000848   11846 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:21:45.002398   11846 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 18:21:45.002531   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:21:45.002566   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:45.016840   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0913 18:21:45.017377   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:45.017901   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:21:45.017922   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:45.018288   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:45.018450   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:21:45.018570   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:21:45.018700   11846 start.go:159] libmachine.API.Create for "addons-979357" (driver="kvm2")
	I0913 18:21:45.018725   11846 client.go:168] LocalClient.Create starting
	I0913 18:21:45.018761   11846 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:21:45.156400   11846 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:21:45.353847   11846 main.go:141] libmachine: Running pre-create checks...
	I0913 18:21:45.353873   11846 main.go:141] libmachine: (addons-979357) Calling .PreCreateCheck
	I0913 18:21:45.354405   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:21:45.354848   11846 main.go:141] libmachine: Creating machine...
	I0913 18:21:45.354863   11846 main.go:141] libmachine: (addons-979357) Calling .Create
	I0913 18:21:45.354984   11846 main.go:141] libmachine: (addons-979357) Creating KVM machine...
	I0913 18:21:45.356174   11846 main.go:141] libmachine: (addons-979357) DBG | found existing default KVM network
	I0913 18:21:45.356944   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.356784   11867 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014fa0}
	I0913 18:21:45.356967   11846 main.go:141] libmachine: (addons-979357) DBG | created network xml: 
	I0913 18:21:45.356978   11846 main.go:141] libmachine: (addons-979357) DBG | <network>
	I0913 18:21:45.356983   11846 main.go:141] libmachine: (addons-979357) DBG |   <name>mk-addons-979357</name>
	I0913 18:21:45.356989   11846 main.go:141] libmachine: (addons-979357) DBG |   <dns enable='no'/>
	I0913 18:21:45.356997   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357004   11846 main.go:141] libmachine: (addons-979357) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:21:45.357012   11846 main.go:141] libmachine: (addons-979357) DBG |     <dhcp>
	I0913 18:21:45.357018   11846 main.go:141] libmachine: (addons-979357) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:21:45.357022   11846 main.go:141] libmachine: (addons-979357) DBG |     </dhcp>
	I0913 18:21:45.357027   11846 main.go:141] libmachine: (addons-979357) DBG |   </ip>
	I0913 18:21:45.357033   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357037   11846 main.go:141] libmachine: (addons-979357) DBG | </network>
	I0913 18:21:45.357041   11846 main.go:141] libmachine: (addons-979357) DBG | 
	I0913 18:21:45.362778   11846 main.go:141] libmachine: (addons-979357) DBG | trying to create private KVM network mk-addons-979357 192.168.39.0/24...
	I0913 18:21:45.429739   11846 main.go:141] libmachine: (addons-979357) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.429776   11846 main.go:141] libmachine: (addons-979357) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:21:45.429787   11846 main.go:141] libmachine: (addons-979357) DBG | private KVM network mk-addons-979357 192.168.39.0/24 created
	I0913 18:21:45.429871   11846 main.go:141] libmachine: (addons-979357) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:21:45.429918   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.429655   11867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.695461   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.695348   11867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa...
	I0913 18:21:45.815456   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815333   11867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk...
	I0913 18:21:45.815481   11846 main.go:141] libmachine: (addons-979357) DBG | Writing magic tar header
	I0913 18:21:45.815490   11846 main.go:141] libmachine: (addons-979357) DBG | Writing SSH key tar header
	I0913 18:21:45.815498   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815436   11867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.815566   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357
	I0913 18:21:45.815594   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:21:45.815609   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 (perms=drwx------)
	I0913 18:21:45.815616   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.815624   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:21:45.815629   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:21:45.815635   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:21:45.815641   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home
	I0913 18:21:45.815651   11846 main.go:141] libmachine: (addons-979357) DBG | Skipping /home - not owner
	I0913 18:21:45.815665   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:21:45.815681   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:21:45.815693   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:21:45.815703   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:21:45.815711   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:21:45.815741   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:45.816699   11846 main.go:141] libmachine: (addons-979357) define libvirt domain using xml: 
	I0913 18:21:45.816712   11846 main.go:141] libmachine: (addons-979357) <domain type='kvm'>
	I0913 18:21:45.816718   11846 main.go:141] libmachine: (addons-979357)   <name>addons-979357</name>
	I0913 18:21:45.816723   11846 main.go:141] libmachine: (addons-979357)   <memory unit='MiB'>4000</memory>
	I0913 18:21:45.816728   11846 main.go:141] libmachine: (addons-979357)   <vcpu>2</vcpu>
	I0913 18:21:45.816732   11846 main.go:141] libmachine: (addons-979357)   <features>
	I0913 18:21:45.816738   11846 main.go:141] libmachine: (addons-979357)     <acpi/>
	I0913 18:21:45.816744   11846 main.go:141] libmachine: (addons-979357)     <apic/>
	I0913 18:21:45.816750   11846 main.go:141] libmachine: (addons-979357)     <pae/>
	I0913 18:21:45.816759   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.816766   11846 main.go:141] libmachine: (addons-979357)   </features>
	I0913 18:21:45.816776   11846 main.go:141] libmachine: (addons-979357)   <cpu mode='host-passthrough'>
	I0913 18:21:45.816783   11846 main.go:141] libmachine: (addons-979357)   
	I0913 18:21:45.816798   11846 main.go:141] libmachine: (addons-979357)   </cpu>
	I0913 18:21:45.816806   11846 main.go:141] libmachine: (addons-979357)   <os>
	I0913 18:21:45.816810   11846 main.go:141] libmachine: (addons-979357)     <type>hvm</type>
	I0913 18:21:45.816816   11846 main.go:141] libmachine: (addons-979357)     <boot dev='cdrom'/>
	I0913 18:21:45.816820   11846 main.go:141] libmachine: (addons-979357)     <boot dev='hd'/>
	I0913 18:21:45.816825   11846 main.go:141] libmachine: (addons-979357)     <bootmenu enable='no'/>
	I0913 18:21:45.816831   11846 main.go:141] libmachine: (addons-979357)   </os>
	I0913 18:21:45.816836   11846 main.go:141] libmachine: (addons-979357)   <devices>
	I0913 18:21:45.816843   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='cdrom'>
	I0913 18:21:45.816853   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/boot2docker.iso'/>
	I0913 18:21:45.816864   11846 main.go:141] libmachine: (addons-979357)       <target dev='hdc' bus='scsi'/>
	I0913 18:21:45.816874   11846 main.go:141] libmachine: (addons-979357)       <readonly/>
	I0913 18:21:45.816884   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816910   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='disk'>
	I0913 18:21:45.816927   11846 main.go:141] libmachine: (addons-979357)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:21:45.816935   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk'/>
	I0913 18:21:45.816942   11846 main.go:141] libmachine: (addons-979357)       <target dev='hda' bus='virtio'/>
	I0913 18:21:45.816949   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816955   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.816961   11846 main.go:141] libmachine: (addons-979357)       <source network='mk-addons-979357'/>
	I0913 18:21:45.816971   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.816986   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.816998   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.817019   11846 main.go:141] libmachine: (addons-979357)       <source network='default'/>
	I0913 18:21:45.817038   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.817050   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.817060   11846 main.go:141] libmachine: (addons-979357)     <serial type='pty'>
	I0913 18:21:45.817071   11846 main.go:141] libmachine: (addons-979357)       <target port='0'/>
	I0913 18:21:45.817077   11846 main.go:141] libmachine: (addons-979357)     </serial>
	I0913 18:21:45.817082   11846 main.go:141] libmachine: (addons-979357)     <console type='pty'>
	I0913 18:21:45.817089   11846 main.go:141] libmachine: (addons-979357)       <target type='serial' port='0'/>
	I0913 18:21:45.817096   11846 main.go:141] libmachine: (addons-979357)     </console>
	I0913 18:21:45.817105   11846 main.go:141] libmachine: (addons-979357)     <rng model='virtio'>
	I0913 18:21:45.817123   11846 main.go:141] libmachine: (addons-979357)       <backend model='random'>/dev/random</backend>
	I0913 18:21:45.817134   11846 main.go:141] libmachine: (addons-979357)     </rng>
	I0913 18:21:45.817145   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817152   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817157   11846 main.go:141] libmachine: (addons-979357)   </devices>
	I0913 18:21:45.817163   11846 main.go:141] libmachine: (addons-979357) </domain>
	I0913 18:21:45.817170   11846 main.go:141] libmachine: (addons-979357) 
	I0913 18:21:45.823068   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:c9:b7:e5 in network default
	I0913 18:21:45.823613   11846 main.go:141] libmachine: (addons-979357) Ensuring networks are active...
	I0913 18:21:45.823634   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:45.824217   11846 main.go:141] libmachine: (addons-979357) Ensuring network default is active
	I0913 18:21:45.824556   11846 main.go:141] libmachine: (addons-979357) Ensuring network mk-addons-979357 is active
	I0913 18:21:45.825087   11846 main.go:141] libmachine: (addons-979357) Getting domain xml...
	I0913 18:21:45.825697   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:47.215259   11846 main.go:141] libmachine: (addons-979357) Waiting to get IP...
	I0913 18:21:47.216244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.216720   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.216737   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.216708   11867 retry.go:31] will retry after 288.192907ms: waiting for machine to come up
	I0913 18:21:47.506172   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.506706   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.506739   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.506644   11867 retry.go:31] will retry after 265.001251ms: waiting for machine to come up
	I0913 18:21:47.773271   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.773783   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.773811   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.773744   11867 retry.go:31] will retry after 301.987216ms: waiting for machine to come up
	I0913 18:21:48.077134   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.077602   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.077633   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.077565   11867 retry.go:31] will retry after 551.807466ms: waiting for machine to come up
	I0913 18:21:48.631439   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.631926   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.631948   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.631877   11867 retry.go:31] will retry after 628.057496ms: waiting for machine to come up
	I0913 18:21:49.261251   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:49.261632   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:49.261655   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:49.261592   11867 retry.go:31] will retry after 766.331433ms: waiting for machine to come up
	I0913 18:21:50.030151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.030680   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.030703   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.030633   11867 retry.go:31] will retry after 869.088297ms: waiting for machine to come up
	I0913 18:21:50.901609   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.902025   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.902046   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.901973   11867 retry.go:31] will retry after 1.351047403s: waiting for machine to come up
	I0913 18:21:52.255406   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:52.255833   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:52.255854   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:52.255806   11867 retry.go:31] will retry after 1.528727429s: waiting for machine to come up
	I0913 18:21:53.785667   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:53.786063   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:53.786084   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:53.786023   11867 retry.go:31] will retry after 1.928511226s: waiting for machine to come up
	I0913 18:21:55.715767   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:55.716158   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:55.716180   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:55.716108   11867 retry.go:31] will retry after 1.901214708s: waiting for machine to come up
	I0913 18:21:57.619291   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:57.619861   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:57.619887   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:57.619823   11867 retry.go:31] will retry after 2.844347432s: waiting for machine to come up
	I0913 18:22:00.465541   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:00.465982   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:00.466008   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:00.465919   11867 retry.go:31] will retry after 3.134520129s: waiting for machine to come up
	I0913 18:22:03.603405   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:03.603856   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:03.603883   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:03.603813   11867 retry.go:31] will retry after 4.895864383s: waiting for machine to come up
	I0913 18:22:08.503574   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.503985   11846 main.go:141] libmachine: (addons-979357) Found IP for machine: 192.168.39.34
	I0913 18:22:08.504003   11846 main.go:141] libmachine: (addons-979357) Reserving static IP address...
	I0913 18:22:08.504016   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has current primary IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.504317   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find host DHCP lease matching {name: "addons-979357", mac: "52:54:00:9b:f4:d7", ip: "192.168.39.34"} in network mk-addons-979357
	I0913 18:22:08.572524   11846 main.go:141] libmachine: (addons-979357) DBG | Getting to WaitForSSH function...
	I0913 18:22:08.572569   11846 main.go:141] libmachine: (addons-979357) Reserved static IP address: 192.168.39.34
	I0913 18:22:08.572583   11846 main.go:141] libmachine: (addons-979357) Waiting for SSH to be available...
	I0913 18:22:08.574749   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575144   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.575171   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575290   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH client type: external
	I0913 18:22:08.575309   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa (-rw-------)
	I0913 18:22:08.575337   11846 main.go:141] libmachine: (addons-979357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:22:08.575351   11846 main.go:141] libmachine: (addons-979357) DBG | About to run SSH command:
	I0913 18:22:08.575368   11846 main.go:141] libmachine: (addons-979357) DBG | exit 0
	I0913 18:22:08.710507   11846 main.go:141] libmachine: (addons-979357) DBG | SSH cmd err, output: <nil>: 
	I0913 18:22:08.710759   11846 main.go:141] libmachine: (addons-979357) KVM machine creation complete!
	I0913 18:22:08.711098   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:08.711607   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711785   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711900   11846 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:22:08.711921   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:08.713103   11846 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:22:08.713119   11846 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:22:08.713127   11846 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:22:08.713138   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.715205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715543   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.715570   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715735   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.715880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716011   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716121   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.716248   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.716428   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.716440   11846 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:22:08.829395   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:08.829432   11846 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:22:08.829439   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.832429   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.832877   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.832903   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.833092   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.833258   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833366   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833483   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.833650   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.833827   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.833837   11846 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:22:08.946841   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:22:08.946908   11846 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:22:08.946918   11846 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:22:08.946930   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947154   11846 buildroot.go:166] provisioning hostname "addons-979357"
	I0913 18:22:08.947176   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947341   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.949827   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950138   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.950163   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950307   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.950471   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950625   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950753   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.950889   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.951047   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.951059   11846 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-979357 && echo "addons-979357" | sudo tee /etc/hostname
	I0913 18:22:09.084010   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-979357
	
	I0913 18:22:09.084038   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.086820   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087218   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.087244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.087598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087771   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087892   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.088066   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.088267   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.088291   11846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-979357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-979357/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-979357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:22:09.211719   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:09.211749   11846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:22:09.211801   11846 buildroot.go:174] setting up certificates
	I0913 18:22:09.211812   11846 provision.go:84] configureAuth start
	I0913 18:22:09.211824   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:09.212141   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:09.214775   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215180   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.215205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215376   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.217631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218082   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.218145   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218259   11846 provision.go:143] copyHostCerts
	I0913 18:22:09.218330   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:22:09.218462   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:22:09.218590   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:22:09.218660   11846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.addons-979357 san=[127.0.0.1 192.168.39.34 addons-979357 localhost minikube]
	I0913 18:22:09.715311   11846 provision.go:177] copyRemoteCerts
	I0913 18:22:09.715364   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:22:09.715390   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.718319   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718625   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.718650   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718796   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.718953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.719126   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.719278   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:09.804099   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:22:09.829074   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:22:09.853991   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:22:09.877867   11846 provision.go:87] duration metric: took 666.039773ms to configureAuth
	I0913 18:22:09.877899   11846 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:22:09.878243   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:09.878342   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.881237   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881647   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.881678   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881809   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.882030   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882238   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882372   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.882533   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.882691   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.882704   11846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:22:10.126542   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:22:10.126574   11846 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:22:10.126585   11846 main.go:141] libmachine: (addons-979357) Calling .GetURL
	I0913 18:22:10.128029   11846 main.go:141] libmachine: (addons-979357) DBG | Using libvirt version 6000000
	I0913 18:22:10.130547   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.130974   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.131001   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.131167   11846 main.go:141] libmachine: Docker is up and running!
	I0913 18:22:10.131183   11846 main.go:141] libmachine: Reticulating splines...
	I0913 18:22:10.131190   11846 client.go:171] duration metric: took 25.112456647s to LocalClient.Create
	I0913 18:22:10.131217   11846 start.go:167] duration metric: took 25.112517605s to libmachine.API.Create "addons-979357"
	I0913 18:22:10.131230   11846 start.go:293] postStartSetup for "addons-979357" (driver="kvm2")
	I0913 18:22:10.131254   11846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:22:10.131272   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.131521   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:22:10.131545   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.133979   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134328   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.134354   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134501   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.134686   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.134836   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.134952   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.220806   11846 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:22:10.225490   11846 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:22:10.225520   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:22:10.225600   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:22:10.225631   11846 start.go:296] duration metric: took 94.394779ms for postStartSetup
	I0913 18:22:10.225667   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:10.226323   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.229002   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229334   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.229365   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229560   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:22:10.229851   11846 start.go:128] duration metric: took 25.228992984s to createHost
	I0913 18:22:10.229878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.232158   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232608   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.232631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232764   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.232960   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233116   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233281   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.233428   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:10.233612   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:10.233625   11846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:22:10.347102   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726251730.321977350
	
	I0913 18:22:10.347128   11846 fix.go:216] guest clock: 1726251730.321977350
	I0913 18:22:10.347138   11846 fix.go:229] Guest: 2024-09-13 18:22:10.32197735 +0000 UTC Remote: 2024-09-13 18:22:10.22986562 +0000 UTC m=+25.329833233 (delta=92.11173ms)
	I0913 18:22:10.347167   11846 fix.go:200] guest clock delta is within tolerance: 92.11173ms
	I0913 18:22:10.347175   11846 start.go:83] releasing machines lock for "addons-979357", held for 25.34638377s
	I0913 18:22:10.347205   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.347489   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.350285   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350656   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.350686   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350858   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351398   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351583   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351693   11846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:22:10.351742   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.351791   11846 ssh_runner.go:195] Run: cat /version.json
	I0913 18:22:10.351812   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.354604   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354894   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354935   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.354957   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355076   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355290   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355388   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.355421   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355470   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.355584   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355636   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.355715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.356046   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.476853   11846 ssh_runner.go:195] Run: systemctl --version
	I0913 18:22:10.482887   11846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:22:10.641449   11846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:22:10.648344   11846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:22:10.648410   11846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:22:10.664019   11846 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:22:10.664043   11846 start.go:495] detecting cgroup driver to use...
	I0913 18:22:10.664124   11846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:22:10.679953   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:22:10.694986   11846 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:22:10.695040   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:22:10.709192   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:22:10.723529   11846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:22:10.836708   11846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:22:10.978881   11846 docker.go:233] disabling docker service ...
	I0913 18:22:10.978945   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:22:10.993279   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:22:11.006735   11846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:22:11.135365   11846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:22:11.245556   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:22:11.259561   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:22:11.277758   11846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:22:11.277818   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.288773   11846 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:22:11.288829   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.299334   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.309742   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.320384   11846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:22:11.331897   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.343220   11846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.361330   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.372453   11846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:22:11.382315   11846 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:22:11.382392   11846 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:22:11.396538   11846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:22:11.407320   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:11.515601   11846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:22:11.605418   11846 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:22:11.605515   11846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:22:11.610413   11846 start.go:563] Will wait 60s for crictl version
	I0913 18:22:11.610486   11846 ssh_runner.go:195] Run: which crictl
	I0913 18:22:11.614216   11846 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:22:11.653794   11846 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:22:11.653938   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.683751   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.713055   11846 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:22:11.714287   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:11.716720   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717006   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:11.717030   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717315   11846 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:22:11.721668   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:11.734152   11846 kubeadm.go:883] updating cluster {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:22:11.734262   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:22:11.734314   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:11.771955   11846 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 18:22:11.772020   11846 ssh_runner.go:195] Run: which lz4
	I0913 18:22:11.776099   11846 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:22:11.780348   11846 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:22:11.780377   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 18:22:13.063182   11846 crio.go:462] duration metric: took 1.287105483s to copy over tarball
	I0913 18:22:13.063246   11846 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:22:15.131948   11846 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068675166s)
	I0913 18:22:15.131980   11846 crio.go:469] duration metric: took 2.068772112s to extract the tarball
	I0913 18:22:15.131990   11846 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 18:22:15.168309   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:15.210774   11846 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:22:15.210798   11846 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:22:15.210807   11846 kubeadm.go:934] updating node { 192.168.39.34 8443 v1.31.1 crio true true} ...
	I0913 18:22:15.210915   11846 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-979357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:22:15.210993   11846 ssh_runner.go:195] Run: crio config
	I0913 18:22:15.258261   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:15.258285   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:15.258295   11846 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:22:15.258316   11846 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-979357 NodeName:addons-979357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:22:15.258477   11846 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-979357"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 18:22:15.258548   11846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:22:15.268665   11846 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:22:15.268737   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:22:15.278177   11846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 18:22:15.294597   11846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:22:15.310451   11846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0913 18:22:15.326796   11846 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I0913 18:22:15.330636   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:15.343203   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:15.467199   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:15.486141   11846 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357 for IP: 192.168.39.34
	I0913 18:22:15.486166   11846 certs.go:194] generating shared ca certs ...
	I0913 18:22:15.486182   11846 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.486323   11846 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:22:15.662812   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt ...
	I0913 18:22:15.662838   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt: {Name:mk0c4ac93cc268df9a8da3c08edba4e990a1051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.662994   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key ...
	I0913 18:22:15.663004   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key: {Name:mk7c3df6b789a282ec74042612aa69d3d847194d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.663072   11846 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:22:15.760468   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt ...
	I0913 18:22:15.760493   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt: {Name:mk5938022ba0b964dbd2e8d6a95f61ea52a69c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760629   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key ...
	I0913 18:22:15.760638   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key: {Name:mk4740460ce42bde935de79b4943921492fd98a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760700   11846 certs.go:256] generating profile certs ...
	I0913 18:22:15.760762   11846 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key
	I0913 18:22:15.760784   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt with IP's: []
	I0913 18:22:15.869917   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt ...
	I0913 18:22:15.869945   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: {Name:mk629832723b056c40a68a16d59abb9016c4d337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870132   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key ...
	I0913 18:22:15.870143   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key: {Name:mk7fb983c54e63b71552ed34c37898232dd25c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870218   11846 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7
	I0913 18:22:15.870238   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.34]
	I0913 18:22:15.977365   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 ...
	I0913 18:22:15.977392   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7: {Name:mk64caa72268b14b4cff0a9627f89777df35b01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977557   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 ...
	I0913 18:22:15.977570   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7: {Name:mk8693bd1404fecfaa4562dd7e045a763b78878a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977637   11846 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt
	I0913 18:22:15.977706   11846 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key
	I0913 18:22:15.977750   11846 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key
	I0913 18:22:15.977766   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt with IP's: []
	I0913 18:22:16.102506   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt ...
	I0913 18:22:16.102535   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt: {Name:mk4e2dff54c8b7cdd4d081d100bae0960534d953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:16.102678   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key ...
	I0913 18:22:16.102688   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key: {Name:mkeaff14ff97f40f98f8eae4b259ad1243c5a15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:16.102848   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:22:16.102882   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:22:16.102905   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:22:16.102929   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:22:16.103974   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:22:16.128760   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:22:16.154237   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:22:16.180108   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:22:16.216371   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:22:16.241414   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:22:16.265812   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:22:16.288640   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 18:22:16.311923   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:22:16.335383   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:22:16.351852   11846 ssh_runner.go:195] Run: openssl version
	I0913 18:22:16.357393   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:22:16.368587   11846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373059   11846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373123   11846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.378918   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:22:16.390126   11846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:22:16.394003   11846 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:22:16.394057   11846 kubeadm.go:392] StartCluster: {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:22:16.394167   11846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:22:16.394219   11846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:22:16.431957   11846 cri.go:89] found id: ""
	I0913 18:22:16.432037   11846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:22:16.442325   11846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:22:16.452438   11846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:22:16.462279   11846 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:22:16.462298   11846 kubeadm.go:157] found existing configuration files:
	
	I0913 18:22:16.462336   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:22:16.471621   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:22:16.471678   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:22:16.481226   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:22:16.491050   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:22:16.491106   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:22:16.501169   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.510516   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:22:16.510568   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.519925   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:22:16.529268   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:22:16.529320   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:22:16.539219   11846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:22:16.593329   11846 kubeadm.go:310] W0913 18:22:16.575543     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.594569   11846 kubeadm.go:310] W0913 18:22:16.576957     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.708878   11846 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:22:26.701114   11846 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:22:26.701216   11846 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:22:26.701325   11846 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:22:26.701444   11846 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:22:26.701566   11846 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:22:26.701658   11846 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:22:26.703010   11846 out.go:235]   - Generating certificates and keys ...
	I0913 18:22:26.703101   11846 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:22:26.703171   11846 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:22:26.703246   11846 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:22:26.703315   11846 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:22:26.703395   11846 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:22:26.703486   11846 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:22:26.703560   11846 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:22:26.703710   11846 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.703780   11846 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:22:26.703947   11846 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.704047   11846 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:22:26.704149   11846 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:22:26.704214   11846 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:22:26.704286   11846 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:22:26.704372   11846 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:22:26.704458   11846 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:22:26.704532   11846 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:22:26.704633   11846 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:22:26.704715   11846 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:22:26.704825   11846 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:22:26.704915   11846 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:22:26.706252   11846 out.go:235]   - Booting up control plane ...
	I0913 18:22:26.706339   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:22:26.706406   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:22:26.706497   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:22:26.706623   11846 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:22:26.706724   11846 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:22:26.706784   11846 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:22:26.706939   11846 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:22:26.707027   11846 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:22:26.707076   11846 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.200467ms
	I0913 18:22:26.707151   11846 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:22:26.707212   11846 kubeadm.go:310] [api-check] The API server is healthy after 5.501177192s
	I0913 18:22:26.707308   11846 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:22:26.707422   11846 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:22:26.707475   11846 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:22:26.707633   11846 kubeadm.go:310] [mark-control-plane] Marking the node addons-979357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:22:26.707707   11846 kubeadm.go:310] [bootstrap-token] Using token: d54731.5jrr63v1n2n2kz6m
	I0913 18:22:26.708858   11846 out.go:235]   - Configuring RBAC rules ...
	I0913 18:22:26.708942   11846 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:22:26.709016   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:22:26.709169   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:22:26.709274   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:22:26.709367   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:22:26.709442   11846 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:22:26.709548   11846 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:22:26.709594   11846 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:22:26.709640   11846 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:22:26.709650   11846 kubeadm.go:310] 
	I0913 18:22:26.709698   11846 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:22:26.709704   11846 kubeadm.go:310] 
	I0913 18:22:26.709773   11846 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:22:26.709779   11846 kubeadm.go:310] 
	I0913 18:22:26.709801   11846 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:22:26.709847   11846 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:22:26.709896   11846 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:22:26.709905   11846 kubeadm.go:310] 
	I0913 18:22:26.709959   11846 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:22:26.709965   11846 kubeadm.go:310] 
	I0913 18:22:26.710000   11846 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:22:26.710006   11846 kubeadm.go:310] 
	I0913 18:22:26.710049   11846 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:22:26.710145   11846 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:22:26.710258   11846 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:22:26.710269   11846 kubeadm.go:310] 
	I0913 18:22:26.710342   11846 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:22:26.710413   11846 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:22:26.710420   11846 kubeadm.go:310] 
	I0913 18:22:26.710489   11846 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710581   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 18:22:26.710601   11846 kubeadm.go:310] 	--control-plane 
	I0913 18:22:26.710604   11846 kubeadm.go:310] 
	I0913 18:22:26.710674   11846 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:22:26.710680   11846 kubeadm.go:310] 
	I0913 18:22:26.710750   11846 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710853   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 18:22:26.710865   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:26.710872   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:26.712247   11846 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:22:26.713291   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:22:26.725202   11846 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 18:22:26.748825   11846 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:22:26.748885   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:26.748946   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-979357 minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-979357 minikube.k8s.io/primary=true
	I0913 18:22:26.785894   11846 ops.go:34] apiserver oom_adj: -16
	I0913 18:22:26.895212   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.395975   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.896320   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.395286   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.896168   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.395706   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.896217   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.395424   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.477836   11846 kubeadm.go:1113] duration metric: took 3.729011911s to wait for elevateKubeSystemPrivileges
	I0913 18:22:30.477865   11846 kubeadm.go:394] duration metric: took 14.083813405s to StartCluster
	I0913 18:22:30.477884   11846 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.477996   11846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:22:30.478387   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.478575   11846 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:22:30.478599   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:22:30.478630   11846 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:22:30.478752   11846 addons.go:69] Setting yakd=true in profile "addons-979357"
	I0913 18:22:30.478773   11846 addons.go:234] Setting addon yakd=true in "addons-979357"
	I0913 18:22:30.478770   11846 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-979357"
	I0913 18:22:30.478804   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478792   11846 addons.go:69] Setting metrics-server=true in profile "addons-979357"
	I0913 18:22:30.478823   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.478809   11846 addons.go:69] Setting cloud-spanner=true in profile "addons-979357"
	I0913 18:22:30.478835   11846 addons.go:69] Setting default-storageclass=true in profile "addons-979357"
	I0913 18:22:30.478838   11846 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-979357"
	I0913 18:22:30.478848   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-979357"
	I0913 18:22:30.478849   11846 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:30.478825   11846 addons.go:234] Setting addon metrics-server=true in "addons-979357"
	I0913 18:22:30.478861   11846 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-979357"
	I0913 18:22:30.478875   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478882   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478898   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478908   11846 addons.go:69] Setting registry=true in profile "addons-979357"
	I0913 18:22:30.478923   11846 addons.go:234] Setting addon registry=true in "addons-979357"
	I0913 18:22:30.478984   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478995   11846 addons.go:69] Setting ingress=true in profile "addons-979357"
	I0913 18:22:30.479089   11846 addons.go:234] Setting addon ingress=true in "addons-979357"
	I0913 18:22:30.479124   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479203   11846 addons.go:69] Setting ingress-dns=true in profile "addons-979357"
	I0913 18:22:30.479238   11846 addons.go:234] Setting addon ingress-dns=true in "addons-979357"
	I0913 18:22:30.479259   11846 addons.go:69] Setting gcp-auth=true in profile "addons-979357"
	I0913 18:22:30.479268   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479281   11846 mustload.go:65] Loading cluster: addons-979357
	I0913 18:22:30.479301   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479333   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479338   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479346   11846 addons.go:69] Setting inspektor-gadget=true in profile "addons-979357"
	I0913 18:22:30.479350   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479360   11846 addons.go:234] Setting addon inspektor-gadget=true in "addons-979357"
	I0913 18:22:30.479369   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479383   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479395   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479433   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479463   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479587   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.479600   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479640   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479708   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479727   11846 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-979357"
	I0913 18:22:30.479729   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479738   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479742   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-979357"
	I0913 18:22:30.479754   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479921   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.478897   11846 addons.go:234] Setting addon cloud-spanner=true in "addons-979357"
	I0913 18:22:30.480164   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480219   11846 addons.go:69] Setting volcano=true in profile "addons-979357"
	I0913 18:22:30.480245   11846 addons.go:234] Setting addon volcano=true in "addons-979357"
	I0913 18:22:30.480280   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478820   11846 addons.go:69] Setting storage-provisioner=true in profile "addons-979357"
	I0913 18:22:30.480370   11846 addons.go:234] Setting addon storage-provisioner=true in "addons-979357"
	I0913 18:22:30.480426   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480535   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480572   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480640   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480673   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480820   11846 addons.go:69] Setting volumesnapshots=true in profile "addons-979357"
	I0913 18:22:30.480840   11846 addons.go:234] Setting addon volumesnapshots=true in "addons-979357"
	I0913 18:22:30.480871   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480912   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480944   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.481326   11846 out.go:177] * Verifying Kubernetes components...
	I0913 18:22:30.479242   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481520   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479334   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481650   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.482721   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:30.500237   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0913 18:22:30.500463   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0913 18:22:30.500482   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0913 18:22:30.500639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0913 18:22:30.500830   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500893   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500990   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501068   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501371   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501388   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501510   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501533   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501550   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501853   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501869   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501892   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.501924   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502060   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502499   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.502534   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.508808   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.508875   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I0913 18:22:30.514450   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514505   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514561   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514588   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514611   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514702   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514722   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.515525   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.515558   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518495   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0913 18:22:30.518648   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0913 18:22:30.518780   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518966   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.533480   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538314   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.538358   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.538478   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538926   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0913 18:22:30.539091   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539109   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539180   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539204   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539375   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.539537   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539596   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539644   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.540197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540517   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.540641   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540690   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.541616   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.541640   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.541970   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.542152   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.544274   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.544510   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.544533   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546219   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.546227   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.546234   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.546254   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.546261   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546395   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0913 18:22:30.546903   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.547397   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.547419   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.547706   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.548255   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.548304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0913 18:22:30.560480   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0913 18:22:30.560448   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0913 18:22:30.560561   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0913 18:22:30.560630   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0913 18:22:30.560674   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.560692   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.560628   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.560639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	W0913 18:22:30.560805   11846 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 18:22:30.561065   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561200   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561277   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561349   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562326   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562336   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562417   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562436   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562408   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562457   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562500   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562522   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562564   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562575   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563271   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563375   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563548   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563558   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563593   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.563886   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563903   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564271   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.564314   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.564394   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.564411   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564907   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565005   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565037   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565075   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565330   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.565392   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0913 18:22:30.566066   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566122   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566267   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566523   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.567164   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.567203   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.570708   11846 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-979357"
	I0913 18:22:30.570757   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.571197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571229   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.571302   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.571683   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.571734   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.571887   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571926   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.572171   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.572551   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.572627   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.581211   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.581280   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0913 18:22:30.581285   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0913 18:22:30.581511   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.582226   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.582518   11846 addons.go:234] Setting addon default-storageclass=true in "addons-979357"
	I0913 18:22:30.582554   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.582746   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.582762   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.582915   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.582949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.584229   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.584265   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.584235   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0913 18:22:30.584426   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.584925   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.584947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.585303   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.585508   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.586552   11846 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:22:30.586648   11846 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:22:30.586943   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.587350   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.587363   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0913 18:22:30.587491   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.590472   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.590556   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590571   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.590931   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.591000   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591151   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:22:30.591166   11846 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:22:30.591190   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.591251   11846 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:22:30.591281   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591303   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592093   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592703   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0913 18:22:30.592773   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:22:30.593276   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.593795   11846 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:30.593980   11846 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:22:30.594465   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.594464   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:22:30.594524   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.595224   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I0913 18:22:30.595443   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.595455   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.595704   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.595774   11846 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:22:30.596005   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:22:30.596021   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.596021   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.596151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.596413   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:22:30.596485   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.596641   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.597089   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.597116   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.597626   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.597205   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.597661   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.597680   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.597823   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.597900   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.597924   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:22:30.597937   11846 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:22:30.597966   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.598032   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.598634   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.598726   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.598936   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.599673   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.599727   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600006   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.600036   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.600232   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:30.600261   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.600288   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:22:30.600338   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.600344   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600980   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.601242   11846 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:22:30.601962   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.602482   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.602787   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0913 18:22:30.602898   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.602716   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.603290   11846 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:22:30.603303   11846 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:22:30.603320   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.603501   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.603522   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.603562   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.603698   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.603843   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.603971   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.604143   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.604873   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.604890   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.605828   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605850   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605884   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.606050   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.606504   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.606528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.606942   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607111   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.607137   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.607517   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607675   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.607867   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.607917   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.608172   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608407   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.608496   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608593   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.608646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608773   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.608791   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.609011   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.609108   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.609196   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.609292   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.610290   11846 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:22:30.610387   11846 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:22:30.611752   11846 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:30.611767   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:22:30.611783   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.611860   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:22:30.611868   11846 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:22:30.611881   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.615942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616142   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0913 18:22:30.616410   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.616449   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616495   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.616724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.616880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.616942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617103   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.617382   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.617407   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617450   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.617566   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.617700   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.617907   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.617923   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.617987   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.618223   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.618283   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.618400   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.618450   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0913 18:22:30.619331   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.619872   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.619894   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.620712   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.620723   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.621112   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0913 18:22:30.621385   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.621616   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0913 18:22:30.621630   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.621681   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.621808   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.621830   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.621985   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.622213   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.622502   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.622523   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.622544   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.622785   11846 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:22:30.623076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.623434   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.624020   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0913 18:22:30.624371   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.624479   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:30.624499   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:22:30.624514   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.624774   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.624794   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.625076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.625321   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.626357   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.627106   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.628111   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:22:30.628769   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629056   11846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:22:30.629179   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.629566   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629413   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.629715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.629829   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.629985   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.631455   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:30.631475   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:22:30.631490   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.632139   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:22:30.634478   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:22:30.634531   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.634969   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.634985   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.635140   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.635299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.635443   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.635542   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.636827   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:22:30.637904   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:22:30.639028   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:22:30.640544   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:22:30.641535   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0913 18:22:30.641939   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642316   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0913 18:22:30.642465   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.642489   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.642731   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642818   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.642875   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:22:30.643103   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.643113   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.643375   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.643394   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.643415   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.643509   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.644348   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:22:30.644366   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:22:30.644386   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.645550   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.647421   11846 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:22:30.647683   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648186   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.648207   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648479   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.648648   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.648781   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.648911   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.649886   11846 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:22:30.651056   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:30.651073   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:22:30.651091   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.654528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.654955   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.654976   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.655136   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.655308   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.655455   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.655556   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.661503   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0913 18:22:30.661851   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.662364   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.662380   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.662640   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.662820   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.664099   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.664269   11846 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.664283   11846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:22:30.664299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.666963   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667366   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.667383   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667513   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.667646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.667741   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.667850   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.876396   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:30.876459   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:22:30.928879   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.930858   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:22:30.930876   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:22:30.989689   11846 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:22:30.989714   11846 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:22:31.040586   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:31.057460   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:31.100555   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:22:31.100583   11846 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:22:31.105990   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:22:31.106016   11846 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:22:31.191777   11846 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:22:31.191803   11846 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:22:31.194629   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:22:31.194653   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:22:31.261951   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:31.268194   11846 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:22:31.268218   11846 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:22:31.269743   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:22:31.269764   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:22:31.367341   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:31.383222   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.383252   11846 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:22:31.394617   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:31.396907   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:31.431732   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:22:31.431760   11846 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:22:31.472624   11846 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:22:31.472651   11846 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:22:31.498512   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:22:31.498541   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:22:31.549749   11846 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.549772   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:22:31.556719   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:22:31.556741   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:22:31.566668   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.583646   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:22:31.583673   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:22:31.624498   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:22:31.624524   11846 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:22:31.705541   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:22:31.705566   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:22:31.738522   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:31.738549   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:22:31.744752   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.774264   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:22:31.774288   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:22:31.899545   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:31.899571   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:22:31.916895   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:22:31.916922   11846 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:22:32.112312   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:22:32.112341   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:22:32.123767   11846 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:32.123794   11846 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:22:32.215746   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:32.287431   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:32.287460   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:22:32.301669   11846 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.301701   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:22:32.394481   11846 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.394508   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:22:32.514672   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:32.514700   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:22:32.519283   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.584445   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.808431   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:32.808460   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:22:32.958075   11846 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.081583936s)
	I0913 18:22:32.958125   11846 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 18:22:32.958136   11846 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.081703044s)
	I0913 18:22:32.958221   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029312252s)
	I0913 18:22:32.958260   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.958271   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959173   11846 node_ready.go:35] waiting up to 6m0s for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.959336   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959354   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.959377   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.959389   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959904   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959941   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959953   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.962939   11846 node_ready.go:49] node "addons-979357" has status "Ready":"True"
	I0913 18:22:32.962965   11846 node_ready.go:38] duration metric: took 3.757473ms for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.962977   11846 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:32.981363   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:32.982346   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.982366   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.982651   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.982696   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.982707   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:33.207362   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:33.207383   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:22:33.462364   11846 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-979357" context rescaled to 1 replicas
	I0913 18:22:33.565942   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:33.565968   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:22:33.892546   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:33.892578   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:22:34.137718   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:35.208928   11846 pod_ready.go:103] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:35.463173   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.422547754s)
	I0913 18:22:35.463218   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463226   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463481   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463503   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:35.463512   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463519   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463699   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463745   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:35.463754   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.177658   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.120163066s)
	I0913 18:22:36.177710   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177722   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177781   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.915798657s)
	I0913 18:22:36.177817   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177829   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177818   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.810444318s)
	I0913 18:22:36.177874   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177895   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177950   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.177983   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.177995   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178004   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178012   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178377   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178392   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178415   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178438   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178473   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178498   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178511   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178524   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178536   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178447   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178606   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178613   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178625   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178943   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178958   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.179947   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.179951   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.179962   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.391729   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.391752   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.392010   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.392058   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.392065   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.513516   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:36.513545   11846 pod_ready.go:82] duration metric: took 3.532154275s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:36.513561   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.702586   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:22:37.702623   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:37.705721   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706173   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:37.706204   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:37.706598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:37.706724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:37.706834   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:37.941566   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:22:38.057578   11846 addons.go:234] Setting addon gcp-auth=true in "addons-979357"
	I0913 18:22:38.057630   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:38.057962   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.057998   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.072716   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0913 18:22:38.073244   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.073727   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.073753   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.074119   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.074874   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.074920   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.089603   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0913 18:22:38.090145   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.090681   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.090703   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.091107   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.091372   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:38.093189   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:38.093398   11846 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:22:38.093425   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:38.096456   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.096850   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:38.096871   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.097020   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:38.097184   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:38.097332   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:38.097456   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:38.611050   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:38.611074   11846 pod_ready.go:82] duration metric: took 2.097504572s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:38.611087   11846 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.180671   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.783727776s)
	I0913 18:22:39.180723   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180729   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.78607227s)
	I0913 18:22:39.180743   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.614047493s)
	I0913 18:22:39.180760   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180786   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436006015s)
	I0913 18:22:39.180808   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180818   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180820   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.965045353s)
	I0913 18:22:39.180833   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180846   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180763   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180917   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180791   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180980   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.661665418s)
	I0913 18:22:39.180735   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	W0913 18:22:39.181015   11846 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:39.181035   11846 retry.go:31] will retry after 132.635799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:39.181141   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.5966432s)
	I0913 18:22:39.181168   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181177   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.181255   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.181292   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.181299   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.181306   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181313   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182158   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182169   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182177   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182194   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182874   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.182909   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182918   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182925   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182932   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183061   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183085   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183090   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183101   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183173   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183188   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183192   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183198   11846 addons.go:475] Verifying addon metrics-server=true in "addons-979357"
	I0913 18:22:39.183211   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183227   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183233   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183141   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183266   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183276   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183394   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183404   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183412   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183277   11846 addons.go:475] Verifying addon registry=true in "addons-979357"
	I0913 18:22:39.183673   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183702   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183709   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183175   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183811   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183814   11846 pod_ready.go:93] pod "etcd-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.183240   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183829   11846 pod_ready.go:82] duration metric: took 572.7356ms for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183838   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183842   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183149   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183818   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.184008   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183276   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.184353   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.184367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.184376   11846 addons.go:475] Verifying addon ingress=true in "addons-979357"
	I0913 18:22:39.185002   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.185027   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.186229   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.186332   11846 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-979357 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:22:39.187398   11846 out.go:177] * Verifying registry addon...
	I0913 18:22:39.188256   11846 out.go:177] * Verifying ingress addon...
	I0913 18:22:39.189818   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:22:39.190687   11846 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:22:39.210962   11846 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:22:39.211000   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.212603   11846 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:22:39.212623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.314470   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:39.711545   11846 pod_ready.go:93] pod "kube-apiserver-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.711574   11846 pod_ready.go:82] duration metric: took 527.723521ms for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.711588   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.720988   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.727065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.735954   11846 pod_ready.go:93] pod "kube-controller-manager-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.735985   11846 pod_ready.go:82] duration metric: took 24.3888ms for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.735999   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749808   11846 pod_ready.go:93] pod "kube-proxy-qxmw4" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.749827   11846 pod_ready.go:82] duration metric: took 13.820436ms for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749836   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761817   11846 pod_ready.go:93] pod "kube-scheduler-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.761834   11846 pod_ready.go:82] duration metric: took 11.992857ms for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761841   11846 pod_ready.go:39] duration metric: took 6.798852631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:39.761856   11846 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:39.761902   11846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.017133876s)
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.972790008s)
	I0913 18:22:40.110740   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.110759   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.110996   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111013   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111021   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.111029   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.111037   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.111346   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111360   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111369   11846 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:40.111372   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.112081   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:40.113065   11846 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:22:40.114734   11846 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:22:40.115664   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:22:40.115892   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:40.115906   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:22:40.132558   11846 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:22:40.132577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.211311   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:40.211334   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:22:40.220393   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.220516   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:40.300610   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.300638   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:22:40.389824   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.621694   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.843154   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.844023   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.120868   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.194711   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.195587   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.201412   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.886888763s)
	I0913 18:22:41.201454   11846 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.439534942s)
	I0913 18:22:41.201468   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201480   11846 api_server.go:72] duration metric: took 10.722879781s to wait for apiserver process to appear ...
	I0913 18:22:41.201485   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.201489   11846 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:41.201511   11846 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0913 18:22:41.201764   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.201822   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.201837   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.201844   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201852   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.202028   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.202047   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.206053   11846 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I0913 18:22:41.206959   11846 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:41.206977   11846 api_server.go:131] duration metric: took 5.482612ms to wait for apiserver health ...
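The two api_server.go lines above record minikube probing the control plane's /healthz endpoint at https://192.168.39.34:8443 and treating a 200 response with body "ok" as healthy. As a rough standalone illustration only (not minikube's actual implementation, which reuses the cluster's TLS and auth configuration), an equivalent probe in Go could look like the sketch below; InsecureSkipVerify is an assumption made purely to keep the example self-contained.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver health endpoint recorded in the log above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip certificate verification so the sketch runs without
            // the cluster CA bundle; minikube itself uses the cluster's credentials.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.34:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // log above shows 200: ok
    }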
	I0913 18:22:41.206984   11846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:41.214695   11846 system_pods.go:59] 18 kube-system pods found
	I0913 18:22:41.214727   11846 system_pods.go:61] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.214735   11846 system_pods.go:61] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.214746   11846 system_pods.go:61] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.214760   11846 system_pods.go:61] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.214772   11846 system_pods.go:61] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.214782   11846 system_pods.go:61] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.214789   11846 system_pods.go:61] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.214797   11846 system_pods.go:61] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.214807   11846 system_pods.go:61] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.214821   11846 system_pods.go:61] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.214830   11846 system_pods.go:61] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.214838   11846 system_pods.go:61] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.214850   11846 system_pods.go:61] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.214862   11846 system_pods.go:61] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.214872   11846 system_pods.go:61] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.214884   11846 system_pods.go:61] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214903   11846 system_pods.go:61] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214910   11846 system_pods.go:61] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.214917   11846 system_pods.go:74] duration metric: took 7.926337ms to wait for pod list to return data ...
	I0913 18:22:41.214926   11846 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:41.217763   11846 default_sa.go:45] found service account: "default"
	I0913 18:22:41.217781   11846 default_sa.go:55] duration metric: took 2.845911ms for default service account to be created ...
	I0913 18:22:41.217790   11846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:41.226796   11846 system_pods.go:86] 18 kube-system pods found
	I0913 18:22:41.226823   11846 system_pods.go:89] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.226831   11846 system_pods.go:89] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.226841   11846 system_pods.go:89] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.226852   11846 system_pods.go:89] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.226862   11846 system_pods.go:89] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.226869   11846 system_pods.go:89] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.226876   11846 system_pods.go:89] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.226883   11846 system_pods.go:89] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.226896   11846 system_pods.go:89] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.226903   11846 system_pods.go:89] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.226913   11846 system_pods.go:89] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.226923   11846 system_pods.go:89] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.226936   11846 system_pods.go:89] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.226945   11846 system_pods.go:89] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.226956   11846 system_pods.go:89] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.226966   11846 system_pods.go:89] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226979   11846 system_pods.go:89] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226987   11846 system_pods.go:89] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.226997   11846 system_pods.go:126] duration metric: took 9.200944ms to wait for k8s-apps to be running ...
	I0913 18:22:41.227009   11846 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:41.227055   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:41.634996   11846 system_svc.go:56] duration metric: took 407.978559ms WaitForService to wait for kubelet
	I0913 18:22:41.635015   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.245157022s)
	I0913 18:22:41.635029   11846 kubeadm.go:582] duration metric: took 11.156427988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:41.635054   11846 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:41.635053   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635073   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635381   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635400   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.635410   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635434   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.635497   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635722   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635759   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.638410   11846 addons.go:475] Verifying addon gcp-auth=true in "addons-979357"
	I0913 18:22:41.640220   11846 out.go:177] * Verifying gcp-auth addon...
	I0913 18:22:41.642958   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:22:41.721176   11846 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:22:41.721197   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:41.722056   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.765233   11846 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:22:41.765260   11846 node_conditions.go:123] node cpu capacity is 2
	I0913 18:22:41.765276   11846 node_conditions.go:105] duration metric: took 130.215708ms to run NodePressure ...
	I0913 18:22:41.765289   11846 start.go:241] waiting for startup goroutines ...
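The long run of kapi.go:96 lines that follows is minikube polling each addon's label selector (kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=csi-hostpath-driver and app.kubernetes.io/name=ingress-nginx in kube-system, kubernetes.io/minikube-addons=gcp-auth in gcp-auth) until the matching pods leave Pending. As a simplified sketch only (not minikube's actual kapi.go code, which also tracks Ready conditions and timeouts), an equivalent poll with client-go could look like this; the kubeconfig path and the 500ms interval are illustrative assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls pods matching a label selector until they all reach
    // the Running phase, mirroring (in simplified form) the repeated
    // "waiting for pod ..." lines in the log below.
    func waitForSelector(clientset *kubernetes.Clientset, ns, selector string) error {
        for {
            pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if ready {
                fmt.Printf("pods matching %q are Running\n", selector)
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        // Assumption: kubeconfig path used for illustration; the log shows
        // /var/lib/minikube/kubeconfig being used inside the VM.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForSelector(clientset, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
            panic(err)
        }
    }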
	I0913 18:22:41.787100   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.787864   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.120679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.147184   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.194390   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.195105   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.619872   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.645630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.693894   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.695153   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.120929   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.145927   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.194596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.195583   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.621917   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.645549   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.693559   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.695135   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.121292   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.146843   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.195593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.195599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.621514   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.646833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.694699   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.695284   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.121000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.146665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.221808   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:45.221886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.621175   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.646182   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.696648   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.697620   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.147336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.193470   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.195172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:46.620919   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.646586   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.693776   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.694844   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.121098   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.146164   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.194357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.194812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.620988   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.646008   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.695231   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.695519   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.123021   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.148617   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.194472   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.197071   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.620608   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.647296   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.693740   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.696156   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.121349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.193353   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.195100   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.620792   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.646311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.694786   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.695121   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.120264   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.146350   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.195145   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.195301   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.623572   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.647378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.694258   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.695502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.121299   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.147289   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.195022   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.196037   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.622665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.647969   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.694417   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.695278   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.120925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.147440   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.193805   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.195323   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.620665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.646899   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.694596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.695098   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.121172   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.147196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.193933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.195515   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.620912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.646554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.694887   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.696858   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.121127   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.146492   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.193531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.196209   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.619665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.647089   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.693272   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.695620   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.121110   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.222531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.223243   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.621744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.647722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.695503   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.695685   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.120857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.147149   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.195602   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.195853   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:56.620083   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.646767   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.695272   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.696725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.120527   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.146315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.196813   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.197244   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:57.620578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.647230   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.693611   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.695949   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.120685   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.147408   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.193377   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.195277   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.620171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.646736   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.695046   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.695240   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.121002   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.193596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:59.195514   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.621837   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.646971   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.695285   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.695341   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.120985   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.146606   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.194196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.195216   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:00.622220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.648159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.693250   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.695562   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.121311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.147065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.198443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.198571   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.620857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.647554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.695186   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.695496   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.120196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.147540   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.194122   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.196710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.623336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.646284   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.693416   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.695367   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.121367   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.146882   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.195451   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.196172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.620748   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.647039   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.694700   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.695234   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.121411   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.148078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.194865   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.195162   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.620921   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.645990   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.695569   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.695683   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.120274   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.146571   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.220150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.220498   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.621456   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.647109   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.694530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.695969   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.120728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.146744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.195253   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:06.195415   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.620898   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.647924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.694635   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.694976   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.127001   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.146392   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.193687   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.196384   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:07.621298   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.646498   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.693773   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.695419   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.127877   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.145692   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.193920   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:08.196181   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.622851   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.647712   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.694786   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.696188   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.120734   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.147876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.194575   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.195140   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:09.620159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.693725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.695051   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.121729   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.147049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.195211   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.195743   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.620510   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.646705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.694026   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.695703   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.131933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.221769   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.222414   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.222614   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.620112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.646407   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.693639   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.695523   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.120722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.147783   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.195174   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.195474   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.620765   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.646438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.693266   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.695076   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.120438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.146881   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.195465   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.195886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.621014   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.646016   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.695763   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.696160   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.121538   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.146032   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.194101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.194532   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.620817   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.646854   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.694932   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.695089   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.119855   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.146131   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.220403   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.220546   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.626509   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.648020   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.694713   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.696103   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.147101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.193946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.195256   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.625357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.721430   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.721848   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.722175   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.120426   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.145905   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.220147   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.220899   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.621209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.693623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.695270   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.120271   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.146686   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.193954   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.196010   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.621171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.646946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.694564   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.695211   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.120113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.146469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.196297   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.196447   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.650974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.651697   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.698508   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.699902   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.120815   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.146825   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.195112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.195337   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.620833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.648724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.695238   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.695503   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.120670   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.193758   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.195248   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.620443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.647189   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.693673   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.695255   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.120315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.146703   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.194041   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.195417   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.620344   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.646609   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.694000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.695298   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.119630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.146904   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.195745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:23.195868   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.620453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.645852   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.695186   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.695233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.120504   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.146668   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.193779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.194861   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:24.626216   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.646458   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.694012   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.695912   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.121136   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.147431   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.195249   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.195382   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.622578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.646123   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.693993   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.696212   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.121205   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.145925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:26.195513   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.195566   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:26.624415   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.722553   11846 kapi.go:107] duration metric: took 47.532730438s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:23:26.722593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.722614   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.120042   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.146166   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.195294   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:27.622218   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.646583   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.695195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.120287   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.146533   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.195157   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.619787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.645876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.696846   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.121064   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.146637   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.195783   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.626830   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.726354   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.727329   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.119787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.145744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.624823   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.646556   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.695578   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.120515   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.154577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.196849   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.620779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.647534   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.695303   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.120078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.146438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.620076   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.646251   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.694883   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.120737   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.146599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.194850   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.621679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.646334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.695142   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.121576   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.146542   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.195016   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.623471   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.647269   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.694854   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.121463   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.147807   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.222465   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.620588   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.646453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.694862   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.121876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.147202   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.195143   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.621045   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.647726   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.695696   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.121125   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.147217   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.194840   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.621359   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.646372   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.695547   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.121220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.146601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.195403   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.625530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.645912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.725502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.122386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.146745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.195189   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.620370   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.645995   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.694761   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.119935   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.149974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.195722   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.620233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.646888   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.120849   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.146610   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.198361   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.622772   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.646925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.695237   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.120998   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.152683   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.221014   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.621924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.646885   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.695597   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.120297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.146446   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.195887   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.621897   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.646013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.696557   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.121163   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.147972   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.195376   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.621728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.647558   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.720987   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.121126   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.157724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.258976   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.622505   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.646349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.694812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.123467   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.147968   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.194710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.620795   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.648638   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.696589   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.125323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.148794   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.226767   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.625133   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.665246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.697347   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.120702   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.146546   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.196137   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.620081   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.646626   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.697799   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.120469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.146490   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.195195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.623297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.647120   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.694857   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.121396   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:50.146235   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:50.195440   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.620309   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.036246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.036422   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.120322   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.146655   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.196307   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.621288   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.646663   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.695788   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.120768   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.147113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.194880   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.620746   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.646876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.120209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.146049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.194556   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.623965   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.646378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.697202   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.119892   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.220040   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.220900   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.620194   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.646265   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.694508   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:55.120705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.147221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:55.221270   11846 kapi.go:107] duration metric: took 1m16.030581818s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:23:55.620551   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.722715   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.123824   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.145750   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.620150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.646276   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.120601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.146762   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.620594   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.646802   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.120308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.146334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.621532   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.646676   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.126657   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.151013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.620308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.646351   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.121433   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.146323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.620455   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.647099   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.123791   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:01.148334   11846 kapi.go:107] duration metric: took 1m19.505373536s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:01.150141   11846 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-979357 cluster.
	I0913 18:24:01.151499   11846 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:01.152977   11846 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:01.620787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.121029   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.619924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.121161   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.623550   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.121221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.621386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.120200   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.620252   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:06.120523   11846 kapi.go:107] duration metric: took 1m26.004857088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:24:06.122184   11846 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, cloud-spanner, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 18:24:06.123444   11846 addons.go:510] duration metric: took 1m35.644821989s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server cloud-spanner inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 18:24:06.123477   11846 start.go:246] waiting for cluster config update ...
	I0913 18:24:06.123493   11846 start.go:255] writing updated cluster config ...
	I0913 18:24:06.123731   11846 ssh_runner.go:195] Run: rm -f paused
	I0913 18:24:06.194823   11846 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:06.196641   11846 out.go:177] * Done! kubectl is now configured to use "addons-979357" cluster and "default" namespace by default
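	
	The gcp-auth messages above note that a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label key. Below is a minimal, dependency-free Go sketch of such a pod manifest; it is not part of the test run, and the pod name, container image, and the label value "true" are illustrative assumptions — only the label key comes from the output above.
	
	// Sketch: emit a bare-bones Pod manifest carrying the gcp-auth opt-out label.
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// Plain maps keep the sketch self-contained; in practice the label would
		// simply be added to an existing YAML manifest.
		pod := map[string]any{
			"apiVersion": "v1",
			"kind":       "Pod",
			"metadata": map[string]any{
				"name": "example-no-gcp-auth", // hypothetical name
				"labels": map[string]string{
					// Label key taken from the gcp-auth addon message above;
					// the value "true" is an assumption.
					"gcp-auth-skip-secret": "true",
				},
			},
			"spec": map[string]any{
				"containers": []map[string]any{
					{"name": "app", "image": "nginx"}, // placeholder container
				},
			},
		}
	
		out, err := json.MarshalIndent(pod, "", "  ")
		if err != nil {
			panic(err)
		}
		// The printed manifest can be piped to `kubectl apply -f -`.
		fmt.Println(string(out))
	}
	
	As the second message above notes, pods created before the addon was enabled are only picked up after being recreated or after rerunning addons enable with --refresh.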
	
	
	==> CRI-O <==
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.151814671Z" level=info msg="Removed container b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae: kube-system/registry-66c9cd494c-pwx9m/registry" file="server/container_remove.go:40" id=9b7f6f71-12b9-4d2c-9d2c-0246219b11a6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.151908997Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=9b7f6f71-12b9-4d2c-9d2c-0246219b11a6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.152968505Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fae477ef-21b5-4fda-ae69-192f84fb8507 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.153202717Z" level=debug msg="Response error: rpc error: code = NotFound desc = could not find container \"b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae\": container with ID starting with b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae not found: ID does not exist" file="otel-collector/interceptors.go:71" id=fae477ef-21b5-4fda-ae69-192f84fb8507 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.155877258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d036d663-5736-4ae3-b911-8568dcf60734 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.155934645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d036d663-5736-4ae3-b911-8568dcf60734 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.156991153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bb0b660-c43b-4261-a736-b90f52ef627b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.158564776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252403158536741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519756,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bb0b660-c43b-4261-a736-b90f52ef627b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.159065876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2da33448-6edb-4d6b-929f-a3794505267a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.159189728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2da33448-6edb-4d6b-929f-a3794505267a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.159798816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:490e59266d90d24181a3482ec641c88eeff322360dd9b861100a27159177697a,PodSandboxId:09b3db7de02cffbf027fd1640e0811c98ab70ce2f1ba07fb72b92119248d2e4a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726252397869975992,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34b800b1-d2f8-4d81-badb-d5d003b1751c,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2df31721463d47f829d491bd7502e2abc48420129632cacc3b866c48ba28a11,PodSandboxId:07c7ce6376b4185287cd7ea44e962bde57b42d38047c978d13d5f8fbcaecefa5,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252364321268154,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 454a582a-c7ca-4bde-8403-1ca78f
f1963d,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b1d9d2d01af380ac67a754bd8e1a7ef033e61cf8ba1ab8258d9db672da8f9c,PodSandboxId:5d3c014915f5c0ccad938f8cee2818d5561fbab5e96fb6c2817878fccaa409a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726252361252198894,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 816d967e-d591-4e13-aecb-0cf44aa24faf,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749dc8e85e7656c336851626be1b118bf67c77594c3964f2b3b551e49298eb57,PodSandboxId:17454b04375bbc1ce6cd324c5645f2a48910ec45f2f79cc5d6276b1a5cde93f6,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252355377254955,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod
.uid: 5d8ff709-0270-4feb-8b47-886a509560e2,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfec23d2efb695500d4a9afc0baf0dc731c60ccd651364c92fc342cc71623dd,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726251845612332224,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8d40517c9354851b9dfe1001e813bcc8fd52b47a250cedfdd3b7289c35b5d,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726251843509262557,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpat
hplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470689cfedddc432ad48e73c75c6a58aeef635d804166ea97d4d5e460c12ed2,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726251841775984714,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.k
ubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e546f2073644b15ebd0d5f084eb952ea003a1a9b3a72153425b0cc03ad5a2105,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONT
AINER_RUNNING,CreatedAt:1726251835418163383,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6,PodSandboxId:b34051803c5f2d9ba72a2cd7dd140f4aa9b9861cc5b7be4e1abb5b4380c49214,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726251833783258315,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-6mqg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c80a6556-910f-4e7c-8242-f32234571525,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},
},&Container{Id:6a2c4d4c08c4feb8d55788e1d1a835873fd058e12f7b522e2cf1d3968134f19a,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726251826759286614,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4217678f08ecd393fab908673989bddf937f2015c45e19d2c9d295cced0a6800,PodSandboxId:c14b7db58706ede75986dfee5914219785a2d044927b998f86b50e63f21c764e,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726251824303551329,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9c848e7-3276-496f-a60f-69f8eb633740,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e408ade95ed79c83649fdbdb70d688b2afd3a74ed3c004bdc9c8683e0e53,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726251822174267183,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d68105fabf46cd21f994610d5069fdd13f66e99180302d14b2dc3692f184fc8,PodSandboxId:05c316e809f64caacb6e0616ecc638fbc9463d86e139b60cff5cfa3bddc7b896,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726251820219923607,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5b2986-b2ca-4a85-b195-1c8eb80a223e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.
hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a90cbd91c8c03ee582230559b6b3b52957de04fcbc4b968cdd2e8b4f8cfee00,PodSandboxId:c14add024154b330da362430438efd3e8b7e66ef82f20767de23ead2cf34ef23,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816515968054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-fvbcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c1eb-e28f-4af5-
af33-529d05cce5c8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091ad3de9e7235445b57bff2ac69762cd1be79db77aa25ad59be5034102dbb62,PodSandboxId:0c7d27ecd8d72c0c4634e24aa0684d999b82a400321e60e0268c372f54cd1264,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816344267514,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-r58vx,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661bb76c-4862-41f0-a2d0-1c774b91c7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51,PodSandboxId:5c7921eb426979d624b49b3db24a888ceba405fa718bf1a5c08ae5a053a74ad3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,Stat
e:CONTAINER_RUNNING,CreatedAt:1726251779006114876,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82db8f0-646e-4f6c-8dda-7332bed77579,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa
2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Na
me:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Ima
ge:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Imag
eSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Ima
ge:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b
7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2da33448-6edb-4d6b-929f-a3794505267a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.199639842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc50356c-397d-4010-b9d1-ac33325b5307 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.199766877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc50356c-397d-4010-b9d1-ac33325b5307 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.201197852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95719658-4ac1-41ab-affa-45421f9aea0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.202309479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252403202277712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519756,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95719658-4ac1-41ab-affa-45421f9aea0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.203101311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=731543d0-52c4-4ee2-8ba7-01064c782020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.203165245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=731543d0-52c4-4ee2-8ba7-01064c782020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.203901545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:490e59266d90d24181a3482ec641c88eeff322360dd9b861100a27159177697a,PodSandboxId:09b3db7de02cffbf027fd1640e0811c98ab70ce2f1ba07fb72b92119248d2e4a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726252397869975992,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34b800b1-d2f8-4d81-badb-d5d003b1751c,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2df31721463d47f829d491bd7502e2abc48420129632cacc3b866c48ba28a11,PodSandboxId:07c7ce6376b4185287cd7ea44e962bde57b42d38047c978d13d5f8fbcaecefa5,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252364321268154,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 454a582a-c7ca-4bde-8403-1ca78f
f1963d,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b1d9d2d01af380ac67a754bd8e1a7ef033e61cf8ba1ab8258d9db672da8f9c,PodSandboxId:5d3c014915f5c0ccad938f8cee2818d5561fbab5e96fb6c2817878fccaa409a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726252361252198894,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 816d967e-d591-4e13-aecb-0cf44aa24faf,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749dc8e85e7656c336851626be1b118bf67c77594c3964f2b3b551e49298eb57,PodSandboxId:17454b04375bbc1ce6cd324c5645f2a48910ec45f2f79cc5d6276b1a5cde93f6,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252355377254955,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod
.uid: 5d8ff709-0270-4feb-8b47-886a509560e2,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfec23d2efb695500d4a9afc0baf0dc731c60ccd651364c92fc342cc71623dd,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726251845612332224,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8d40517c9354851b9dfe1001e813bcc8fd52b47a250cedfdd3b7289c35b5d,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726251843509262557,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpat
hplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470689cfedddc432ad48e73c75c6a58aeef635d804166ea97d4d5e460c12ed2,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726251841775984714,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.k
ubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e546f2073644b15ebd0d5f084eb952ea003a1a9b3a72153425b0cc03ad5a2105,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONT
AINER_RUNNING,CreatedAt:1726251835418163383,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6,PodSandboxId:b34051803c5f2d9ba72a2cd7dd140f4aa9b9861cc5b7be4e1abb5b4380c49214,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726251833783258315,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-6mqg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c80a6556-910f-4e7c-8242-f32234571525,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},
},&Container{Id:6a2c4d4c08c4feb8d55788e1d1a835873fd058e12f7b522e2cf1d3968134f19a,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726251826759286614,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4217678f08ecd393fab908673989bddf937f2015c45e19d2c9d295cced0a6800,PodSandboxId:c14b7db58706ede75986dfee5914219785a2d044927b998f86b50e63f21c764e,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726251824303551329,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9c848e7-3276-496f-a60f-69f8eb633740,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e408ade95ed79c83649fdbdb70d688b2afd3a74ed3c004bdc9c8683e0e53,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726251822174267183,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d68105fabf46cd21f994610d5069fdd13f66e99180302d14b2dc3692f184fc8,PodSandboxId:05c316e809f64caacb6e0616ecc638fbc9463d86e139b60cff5cfa3bddc7b896,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726251820219923607,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5b2986-b2ca-4a85-b195-1c8eb80a223e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.
hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a90cbd91c8c03ee582230559b6b3b52957de04fcbc4b968cdd2e8b4f8cfee00,PodSandboxId:c14add024154b330da362430438efd3e8b7e66ef82f20767de23ead2cf34ef23,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816515968054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-fvbcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c1eb-e28f-4af5-
af33-529d05cce5c8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091ad3de9e7235445b57bff2ac69762cd1be79db77aa25ad59be5034102dbb62,PodSandboxId:0c7d27ecd8d72c0c4634e24aa0684d999b82a400321e60e0268c372f54cd1264,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816344267514,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-r58vx,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661bb76c-4862-41f0-a2d0-1c774b91c7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51,PodSandboxId:5c7921eb426979d624b49b3db24a888ceba405fa718bf1a5c08ae5a053a74ad3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,Stat
e:CONTAINER_RUNNING,CreatedAt:1726251779006114876,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82db8f0-646e-4f6c-8dda-7332bed77579,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa
2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Na
me:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Ima
ge:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Imag
eSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Ima
ge:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b
7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=731543d0-52c4-4ee2-8ba7-01064c782020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.242640681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1b745c3-6a66-4579-959e-4dea73bb49d9 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.242787383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1b745c3-6a66-4579-959e-4dea73bb49d9 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.243886515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea291566-ac3f-4d5f-9ecd-b47d6e5a3b59 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.245018200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252403244983795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519756,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea291566-ac3f-4d5f-9ecd-b47d6e5a3b59 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.245779070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c24e5c5b-302a-4fb5-917c-85640dcc2d7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.245860554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c24e5c5b-302a-4fb5-917c-85640dcc2d7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:33:23 addons-979357 crio[661]: time="2024-09-13 18:33:23.246601937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:490e59266d90d24181a3482ec641c88eeff322360dd9b861100a27159177697a,PodSandboxId:09b3db7de02cffbf027fd1640e0811c98ab70ce2f1ba07fb72b92119248d2e4a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726252397869975992,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34b800b1-d2f8-4d81-badb-d5d003b1751c,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2df31721463d47f829d491bd7502e2abc48420129632cacc3b866c48ba28a11,PodSandboxId:07c7ce6376b4185287cd7ea44e962bde57b42d38047c978d13d5f8fbcaecefa5,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252364321268154,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 454a582a-c7ca-4bde-8403-1ca78f
f1963d,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b1d9d2d01af380ac67a754bd8e1a7ef033e61cf8ba1ab8258d9db672da8f9c,PodSandboxId:5d3c014915f5c0ccad938f8cee2818d5561fbab5e96fb6c2817878fccaa409a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726252361252198894,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 816d967e-d591-4e13-aecb-0cf44aa24faf,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749dc8e85e7656c336851626be1b118bf67c77594c3964f2b3b551e49298eb57,PodSandboxId:17454b04375bbc1ce6cd324c5645f2a48910ec45f2f79cc5d6276b1a5cde93f6,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726252355377254955,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2e98d28b-4232-4373-82bf-032b9972820e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod
.uid: 5d8ff709-0270-4feb-8b47-886a509560e2,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfec23d2efb695500d4a9afc0baf0dc731c60ccd651364c92fc342cc71623dd,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726251845612332224,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8d40517c9354851b9dfe1001e813bcc8fd52b47a250cedfdd3b7289c35b5d,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726251843509262557,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpat
hplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470689cfedddc432ad48e73c75c6a58aeef635d804166ea97d4d5e460c12ed2,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726251841775984714,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.k
ubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e546f2073644b15ebd0d5f084eb952ea003a1a9b3a72153425b0cc03ad5a2105,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONT
AINER_RUNNING,CreatedAt:1726251835418163383,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6,PodSandboxId:b34051803c5f2d9ba72a2cd7dd140f4aa9b9861cc5b7be4e1abb5b4380c49214,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726251833783258315,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-6mqg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c80a6556-910f-4e7c-8242-f32234571525,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},
},&Container{Id:6a2c4d4c08c4feb8d55788e1d1a835873fd058e12f7b522e2cf1d3968134f19a,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726251826759286614,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4217678f08ecd393fab908673989bddf937f2015c45e19d2c9d295cced0a6800,PodSandboxId:c14b7db58706ede75986dfee5914219785a2d044927b998f86b50e63f21c764e,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726251824303551329,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9c848e7-3276-496f-a60f-69f8eb633740,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e408ade95ed79c83649fdbdb70d688b2afd3a74ed3c004bdc9c8683e0e53,PodSandboxId:e1eba8472d179600f46e86d9a50697d041404dc639f4542669e6d7c72d57decd,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726251822174267183,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zhd46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53ceb0b-635b-4fa8-a72b-60d626a4370f,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d68105fabf46cd21f994610d5069fdd13f66e99180302d14b2dc3692f184fc8,PodSandboxId:05c316e809f64caacb6e0616ecc638fbc9463d86e139b60cff5cfa3bddc7b896,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726251820219923607,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5b2986-b2ca-4a85-b195-1c8eb80a223e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.
hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a90cbd91c8c03ee582230559b6b3b52957de04fcbc4b968cdd2e8b4f8cfee00,PodSandboxId:c14add024154b330da362430438efd3e8b7e66ef82f20767de23ead2cf34ef23,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816515968054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-fvbcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c1eb-e28f-4af5-
af33-529d05cce5c8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091ad3de9e7235445b57bff2ac69762cd1be79db77aa25ad59be5034102dbb62,PodSandboxId:0c7d27ecd8d72c0c4634e24aa0684d999b82a400321e60e0268c372f54cd1264,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726251816344267514,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-r58vx,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661bb76c-4862-41f0-a2d0-1c774b91c7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51,PodSandboxId:5c7921eb426979d624b49b3db24a888ceba405fa718bf1a5c08ae5a053a74ad3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,Stat
e:CONTAINER_RUNNING,CreatedAt:1726251779006114876,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a82db8f0-646e-4f6c-8dda-7332bed77579,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa
2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Na
me:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Ima
ge:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Imag
eSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Ima
ge:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b
7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c24e5c5b-302a-4fb5-917c-85640dcc2d7b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	490e59266d90d       docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                              5 seconds ago       Running             task-pv-container                        0                   09b3db7de02cf       task-pv-pod-restore
	b2df31721463d       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             39 seconds ago      Exited              helper-pod                               0                   07c7ce6376b41       helper-pod-delete-pvc-2e98d28b-4232-4373-82bf-032b9972820e
	41b1d9d2d01af       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                                            42 seconds ago      Exited              busybox                                  0                   5d3c014915f5c       test-local-path
	749dc8e85e765       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            47 seconds ago      Exited              helper-pod                               0                   17454b04375bb       helper-pod-create-pvc-2e98d28b-4232-4373-82bf-032b9972820e
	3dfec23d2efb6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   e1eba8472d179       csi-hostpathplugin-zhd46
	08c8d40517c93       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   e1eba8472d179       csi-hostpathplugin-zhd46
	3470689cfeddd       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            9 minutes ago       Running             liveness-probe                           0                   e1eba8472d179       csi-hostpathplugin-zhd46
	02c6d6e4b350e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   c3ecf29668767       gcp-auth-89d5ffd79-j795q
	e546f2073644b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   e1eba8472d179       csi-hostpathplugin-zhd46
	3801ba40bdd3d       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             9 minutes ago       Running             controller                               0                   b34051803c5f2       ingress-nginx-controller-bc57996ff-6mqg7
	6a2c4d4c08c4f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   e1eba8472d179       csi-hostpathplugin-zhd46
	4217678f08ecd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              9 minutes ago       Running             csi-resizer                              0                   c14b7db58706e       csi-hostpath-resizer-0
	6fd4e408ade95       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   e1eba8472d179       csi-hostpathplugin-zhd46
	3d68105fabf46       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             9 minutes ago       Running             csi-attacher                             0                   05c316e809f64       csi-hostpath-attacher-0
	fc7f28ac3b62a       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             9 minutes ago       Exited              patch                                    1                   d0ef862442080       ingress-nginx-admission-patch-jsft5
	6f305e18e914b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   9 minutes ago       Exited              create                                   0                   2f0b757b23f97       ingress-nginx-admission-create-t2k2m
	2a90cbd91c8c0       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   c14add024154b       snapshot-controller-56fcc65765-fvbcx
	091ad3de9e723       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   0c7d27ecd8d72       snapshot-controller-56fcc65765-r58vx
	7ab3cdf564912       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        10 minutes ago      Running             metrics-server                           0                   68e88bddaa74c       metrics-server-84c5f94fbc-qw488
	5d8be76d53b6a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   5c7921eb42697       kube-ingress-dns-minikube
	46c152a4abcf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   c2dc3a67499c7       storage-provisioner
	e3bf9ceff710d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             10 minutes ago      Running             coredns                                  0                   abf9b475b5901       coredns-7c65d6cfc9-mtltd
	9134bc1238e6e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             10 minutes ago      Running             kube-proxy                               0                   44e10dfb950fd       kube-proxy-qxmw4
	1d7472d2e3f48       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             11 minutes ago      Running             kube-scheduler                           0                   d552343eeec8a       kube-scheduler-addons-979357
	f36fa2cd406d1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             11 minutes ago      Running             etcd                                     0                   89b0eb49c6580       etcd-addons-979357
	089b47ce33805       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             11 minutes ago      Running             kube-controller-manager                  0                   b67ca3f1d294d       kube-controller-manager-addons-979357
	beb227280e8df       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             11 minutes ago      Running             kube-apiserver                           0                   1644d60ea634e       kube-apiserver-addons-979357
	
	
	==> coredns [e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f] <==
	[INFO] 127.0.0.1:55425 - 14478 "HINFO IN 8414480608980431581.7987847580657585340. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013867574s
	[INFO] 10.244.0.8:41401 - 54033 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000413348s
	[INFO] 10.244.0.8:41401 - 10285 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151346s
	[INFO] 10.244.0.8:59177 - 13648 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180964s
	[INFO] 10.244.0.8:59177 - 58194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000217233s
	[INFO] 10.244.0.8:33613 - 8975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149676s
	[INFO] 10.244.0.8:33613 - 55809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167212s
	[INFO] 10.244.0.8:39507 - 64600 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116346s
	[INFO] 10.244.0.8:39507 - 6487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116459s
	[INFO] 10.244.0.8:44408 - 33423 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177557s
	[INFO] 10.244.0.8:44408 - 53388 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095321s
	[INFO] 10.244.0.8:50243 - 29298 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133268s
	[INFO] 10.244.0.8:50243 - 63089 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075946s
	[INFO] 10.244.0.8:44518 - 41049 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067378s
	[INFO] 10.244.0.8:44518 - 48475 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090248s
	[INFO] 10.244.0.8:58663 - 2901 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053667s
	[INFO] 10.244.0.8:58663 - 55639 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037658s
	[INFO] 10.244.0.21:34953 - 59093 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000423399s
	[INFO] 10.244.0.21:35225 - 60921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000298982s
	[INFO] 10.244.0.21:47005 - 14964 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165017s
	[INFO] 10.244.0.21:38065 - 60873 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151065s
	[INFO] 10.244.0.21:58049 - 44728 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129589s
	[INFO] 10.244.0.21:41316 - 5999 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108833s
	[INFO] 10.244.0.21:53728 - 64340 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000828725s
	[INFO] 10.244.0.21:36643 - 40190 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000688535s
	
	
	==> describe nodes <==
	Name:               addons-979357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-979357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-979357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-979357
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-979357"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-979357
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:33:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:33:00 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:33:00 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:33:00 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:33:00 +0000   Fri, 13 Sep 2024 18:22:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    addons-979357
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 323f75a62e114a2e93170ef9b4ca6dd9
	  System UUID:                323f75a6-2e11-4a2e-9317-0ef9b4ca6dd9
	  Boot ID:                    007169e1-5e2f-4ead-8631-d0c0eed7c494
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gcp-auth                    gcp-auth-89d5ffd79-j795q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6mqg7    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-mtltd                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-zhd46                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-979357                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-979357                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-979357       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-qxmw4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-979357                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-qw488             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-fvbcx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-r58vx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-979357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-979357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-979357 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-979357 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-979357 event: Registered Node addons-979357 in Controller
	
	
	==> dmesg <==
	[  +4.732146] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +1.434262] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.008324] kauditd_printk_skb: 126 callbacks suppressed
	[  +5.315078] kauditd_printk_skb: 141 callbacks suppressed
	[  +7.222070] kauditd_printk_skb: 22 callbacks suppressed
	[Sep13 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.361549] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.110464] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.984432] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.307990] kauditd_printk_skb: 45 callbacks suppressed
	[  +8.629278] kauditd_printk_skb: 63 callbacks suppressed
	[Sep13 18:24] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.527807] kauditd_printk_skb: 16 callbacks suppressed
	[ +19.654471] kauditd_printk_skb: 40 callbacks suppressed
	[Sep13 18:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:26] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.953826] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.633272] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.939706] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.945246] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.115088] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.244947] kauditd_printk_skb: 31 callbacks suppressed
	[Sep13 18:33] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2] <==
	{"level":"warn","ts":"2024-09-13T18:23:51.021543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.099142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.021644Z","caller":"traceutil/trace.go:171","msg":"trace[515273731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"387.282484ms","start":"2024-09-13T18:23:50.634355Z","end":"2024-09-13T18:23:51.021638Z","steps":["trace[515273731] 'agreement among raft nodes before linearized reading'  (duration: 387.071303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.021675Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.634324Z","time spent":"387.339943ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-13T18:23:51.022402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.078944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.022467Z","caller":"traceutil/trace.go:171","msg":"trace[1756911976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"337.150275ms","start":"2024-09-13T18:23:50.685306Z","end":"2024-09-13T18:23:51.022456Z","steps":["trace[1756911976] 'agreement among raft nodes before linearized reading'  (duration: 337.020545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.022506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.685273Z","time spent":"337.222274ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-13T18:23:53.608519Z","caller":"traceutil/trace.go:171","msg":"trace[570854755] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"228.533999ms","start":"2024-09-13T18:23:53.379969Z","end":"2024-09-13T18:23:53.608503Z","steps":["trace[570854755] 'process raft request'  (duration: 228.091989ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:24:05.523053Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:24:05.164429Z","time spent":"358.62098ms","remote":"127.0.0.1:53300","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-09-13T18:24:05.526794Z","caller":"traceutil/trace.go:171","msg":"trace[1285637360] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"245.594439ms","start":"2024-09-13T18:24:05.281082Z","end":"2024-09-13T18:24:05.526676Z","steps":["trace[1285637360] 'process raft request'  (duration: 245.425195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:16.746450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.463174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-13T18:32:16.746572Z","caller":"traceutil/trace.go:171","msg":"trace[1646493262] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1944; }","duration":"259.655607ms","start":"2024-09-13T18:32:16.486899Z","end":"2024-09-13T18:32:16.746555Z","steps":["trace[1646493262] 'count revisions from in-memory index tree'  (duration: 259.404889ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:21.625942Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1491}
	{"level":"info","ts":"2024-09-13T18:32:21.662273Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1491,"took":"35.833101ms","hash":2337312588,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3420160,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-13T18:32:21.662341Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2337312588,"revision":1491,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T18:32:47.777404Z","caller":"traceutil/trace.go:171","msg":"trace[9576718] transaction","detail":"{read_only:false; response_revision:2174; number_of_response:1; }","duration":"150.443543ms","start":"2024-09-13T18:32:47.626934Z","end":"2024-09-13T18:32:47.777378Z","steps":["trace[9576718] 'process raft request'  (duration: 150.357849ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:52.478755Z","caller":"traceutil/trace.go:171","msg":"trace[505158] linearizableReadLoop","detail":"{readStateIndex:2358; appliedIndex:2357; }","duration":"421.352793ms","start":"2024-09-13T18:32:52.057386Z","end":"2024-09-13T18:32:52.478739Z","steps":["trace[505158] 'read index received'  (duration: 421.139117ms)","trace[505158] 'applied index is now lower than readState.Index'  (duration: 212.982µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:32:52.479009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.057609ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.479661Z","caller":"traceutil/trace.go:171","msg":"trace[943115826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2200; }","duration":"350.751111ms","start":"2024-09-13T18:32:52.128898Z","end":"2024-09-13T18:32:52.479649Z","steps":["trace[943115826] 'agreement among raft nodes before linearized reading'  (duration: 350.040298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.479012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.574332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.480358Z","caller":"traceutil/trace.go:171","msg":"trace[691500721] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2200; }","duration":"422.967594ms","start":"2024-09-13T18:32:52.057381Z","end":"2024-09-13T18:32:52.480349Z","steps":["trace[691500721] 'agreement among raft nodes before linearized reading'  (duration: 421.548176ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.480506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:52.057333Z","time spent":"423.124824ms","remote":"127.0.0.1:53272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-13T18:32:52.479052Z","caller":"traceutil/trace.go:171","msg":"trace[2022301504] transaction","detail":"{read_only:false; response_revision:2200; number_of_response:1; }","duration":"547.643865ms","start":"2024-09-13T18:32:51.931399Z","end":"2024-09-13T18:32:52.479043Z","steps":["trace[2022301504] 'process raft request'  (duration: 547.179229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.481455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:51.931384Z","time spent":"549.269751ms","remote":"127.0.0.1:40810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2173 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-09-13T18:32:52.479449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.09265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.481582Z","caller":"traceutil/trace.go:171","msg":"trace[2047800323] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2200; }","duration":"109.228494ms","start":"2024-09-13T18:32:52.372347Z","end":"2024-09-13T18:32:52.481576Z","steps":["trace[2047800323] 'agreement among raft nodes before linearized reading'  (duration: 107.084584ms)"],"step_count":1}
	
	
	==> gcp-auth [02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce] <==
	2024/09/13 18:24:00 GCP Auth Webhook started!
	2024/09/13 18:24:06 Ready to marshal response ...
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:24:06 Ready to marshal response ...
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:24:06 Ready to marshal response ...
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:21 Ready to marshal response ...
	2024/09/13 18:32:21 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:43 Ready to marshal response ...
	2024/09/13 18:32:43 Ready to write response ...
	2024/09/13 18:32:45 Ready to marshal response ...
	2024/09/13 18:32:45 Ready to write response ...
	2024/09/13 18:33:16 Ready to marshal response ...
	2024/09/13 18:33:16 Ready to write response ...
	
	
	==> kernel <==
	 18:33:23 up 11 min,  0 users,  load average: 1.59, 0.71, 0.42
	Linux addons-979357 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a] <==
	E0913 18:24:26.943206       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 18:24:26.943278       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 18:24:26.943344       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 18:24:26.944506       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 18:24:26.944581       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0913 18:24:30.956495       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.112.163:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.112.163:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0913 18:24:30.956793       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 18:24:30.956941       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 18:24:30.979201       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0913 18:24:30.988101       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 18:32:10.039145       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.81.144"}
	I0913 18:32:15.993730       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 18:32:17.054872       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 18:32:59.435526       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0913 18:32:59.736880       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:10.989953       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:11.997980       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:13.005448       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:14.012493       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2] <==
	I0913 18:32:17.466826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="40.394µs"
	W0913 18:32:18.660391       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:18.660460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:32:21.699248       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:21.699305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:32:24.682064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="4µs"
	I0913 18:32:26.115494       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0913 18:32:26.780851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.79µs"
	W0913 18:32:27.231565       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:27.231753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:32:29.890963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-979357"
	I0913 18:32:30.461970       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0913 18:32:30.462173       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 18:32:30.887498       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0913 18:32:30.887536       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 18:32:34.799262       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0913 18:32:36.896509       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0913 18:32:39.734535       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:39.734633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:32:43.014473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="3.524µs"
	I0913 18:32:44.379105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="4.595µs"
	W0913 18:32:58.310080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:58.310233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:33:00.485214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-979357"
	I0913 18:33:22.003089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.089µs"
	
	
	==> kube-proxy [9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:22:33.350612       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:22:33.364476       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.34"]
	E0913 18:22:33.364537       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:22:33.483199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:22:33.483274       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:22:33.483300       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:22:33.488023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:22:33.488274       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:22:33.488283       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:22:33.494316       1 config.go:199] "Starting service config controller"
	I0913 18:22:33.494338       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:22:33.494377       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:22:33.494381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:22:33.497782       1 config.go:328] "Starting node config controller"
	I0913 18:22:33.497794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:22:33.596036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:22:33.596075       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:22:33.598825       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6] <==
	W0913 18:22:23.351491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:22:23.351533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.185862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.185917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.200594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:22:24.200752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.218466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:22:24.218561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.258477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.258532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.395515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.395621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.419001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:22:24.419792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.459549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 18:22:24.459618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.479886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:22:24.480416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.498210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:22:24.498336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.953128       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:22:24.953629       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:22:28.042327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:33:16 addons-979357 kubelet[1204]: I0913 18:33:16.484037    1204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r9t2\" (UniqueName: \"kubernetes.io/projected/34b800b1-d2f8-4d81-badb-d5d003b1751c-kube-api-access-8r9t2\") pod \"task-pv-pod-restore\" (UID: \"34b800b1-d2f8-4d81-badb-d5d003b1751c\") " pod="default/task-pv-pod-restore"
	Sep 13 18:33:16 addons-979357 kubelet[1204]: I0913 18:33:16.484059    1204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/34b800b1-d2f8-4d81-badb-d5d003b1751c-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"34b800b1-d2f8-4d81-badb-d5d003b1751c\") " pod="default/task-pv-pod-restore"
	Sep 13 18:33:16 addons-979357 kubelet[1204]: I0913 18:33:16.593479    1204 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05a902e4-e062-4a12-82c2-7aff7749f83c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a3e5af73-71fe-11ef-964b-1ad3f654ae34\") pod \"task-pv-pod-restore\" (UID: \"34b800b1-d2f8-4d81-badb-d5d003b1751c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/8ab2d89b1101aa6493a53161ad8a1dda0e8c81f0e8ea88613d155508bfcc1a37/globalmount\"" pod="default/task-pv-pod-restore"
	Sep 13 18:33:18 addons-979357 kubelet[1204]: I0913 18:33:18.050935    1204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.0718160860000001 podStartE2EDuration="2.050917663s" podCreationTimestamp="2024-09-13 18:33:16 +0000 UTC" firstStartedPulling="2024-09-13 18:33:16.87996071 +0000 UTC m=+650.985493488" lastFinishedPulling="2024-09-13 18:33:17.859062286 +0000 UTC m=+651.964595065" observedRunningTime="2024-09-13 18:33:18.049295246 +0000 UTC m=+652.154828038" watchObservedRunningTime="2024-09-13 18:33:18.050917663 +0000 UTC m=+652.156450460"
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.630904    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtbgx\" (UniqueName: \"kubernetes.io/projected/1232deeb-f061-44ab-ba3e-cca83d08c6eb-kube-api-access-wtbgx\") pod \"1232deeb-f061-44ab-ba3e-cca83d08c6eb\" (UID: \"1232deeb-f061-44ab-ba3e-cca83d08c6eb\") "
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.630946    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1232deeb-f061-44ab-ba3e-cca83d08c6eb-gcp-creds\") pod \"1232deeb-f061-44ab-ba3e-cca83d08c6eb\" (UID: \"1232deeb-f061-44ab-ba3e-cca83d08c6eb\") "
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.631061    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1232deeb-f061-44ab-ba3e-cca83d08c6eb-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1232deeb-f061-44ab-ba3e-cca83d08c6eb" (UID: "1232deeb-f061-44ab-ba3e-cca83d08c6eb"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.636664    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1232deeb-f061-44ab-ba3e-cca83d08c6eb-kube-api-access-wtbgx" (OuterVolumeSpecName: "kube-api-access-wtbgx") pod "1232deeb-f061-44ab-ba3e-cca83d08c6eb" (UID: "1232deeb-f061-44ab-ba3e-cca83d08c6eb"). InnerVolumeSpecName "kube-api-access-wtbgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.732127    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wtbgx\" (UniqueName: \"kubernetes.io/projected/1232deeb-f061-44ab-ba3e-cca83d08c6eb-kube-api-access-wtbgx\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:33:21 addons-979357 kubelet[1204]: I0913 18:33:21.732180    1204 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1232deeb-f061-44ab-ba3e-cca83d08c6eb-gcp-creds\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:33:22 addons-979357 kubelet[1204]: E0913 18:33:22.019678    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.336146    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mns\" (UniqueName: \"kubernetes.io/projected/d9453f5b-a1d3-40e4-80d3-2250edd642ca-kube-api-access-q4mns\") pod \"d9453f5b-a1d3-40e4-80d3-2250edd642ca\" (UID: \"d9453f5b-a1d3-40e4-80d3-2250edd642ca\") "
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.341581    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9453f5b-a1d3-40e4-80d3-2250edd642ca-kube-api-access-q4mns" (OuterVolumeSpecName: "kube-api-access-q4mns") pod "d9453f5b-a1d3-40e4-80d3-2250edd642ca" (UID: "d9453f5b-a1d3-40e4-80d3-2250edd642ca"). InnerVolumeSpecName "kube-api-access-q4mns". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.436409    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cj4w\" (UniqueName: \"kubernetes.io/projected/8223e4fa-f130-48c6-ab8b-764434495610-kube-api-access-9cj4w\") pod \"8223e4fa-f130-48c6-ab8b-764434495610\" (UID: \"8223e4fa-f130-48c6-ab8b-764434495610\") "
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.436501    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q4mns\" (UniqueName: \"kubernetes.io/projected/d9453f5b-a1d3-40e4-80d3-2250edd642ca-kube-api-access-q4mns\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.440809    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8223e4fa-f130-48c6-ab8b-764434495610-kube-api-access-9cj4w" (OuterVolumeSpecName: "kube-api-access-9cj4w") pod "8223e4fa-f130-48c6-ab8b-764434495610" (UID: "8223e4fa-f130-48c6-ab8b-764434495610"). InnerVolumeSpecName "kube-api-access-9cj4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:22 addons-979357 kubelet[1204]: I0913 18:33:22.536839    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9cj4w\" (UniqueName: \"kubernetes.io/projected/8223e4fa-f130-48c6-ab8b-764434495610-kube-api-access-9cj4w\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.071846    1204 scope.go:117] "RemoveContainer" containerID="5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.135995    1204 scope.go:117] "RemoveContainer" containerID="5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: E0913 18:33:23.136878    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b\": container with ID starting with 5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b not found: ID does not exist" containerID="5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.136930    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b"} err="failed to get container status \"5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b\": rpc error: code = NotFound desc = could not find container \"5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b\": container with ID starting with 5d88e3033902027ecce9dc77460ab287a49ac952de5fb9c339bf909fbbf1510b not found: ID does not exist"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.136954    1204 scope.go:117] "RemoveContainer" containerID="b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.152251    1204 scope.go:117] "RemoveContainer" containerID="b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: E0913 18:33:23.153584    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae\": container with ID starting with b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae not found: ID does not exist" containerID="b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae"
	Sep 13 18:33:23 addons-979357 kubelet[1204]: I0913 18:33:23.153612    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae"} err="failed to get container status \"b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae\": rpc error: code = NotFound desc = could not find container \"b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae\": container with ID starting with b58874031e1c8f17ea718e353d128c0c178439f4fbdfb2463308e7e52f6f4aae not found: ID does not exist"
	
	
	==> storage-provisioner [46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31] <==
	I0913 18:22:38.267389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:22:38.392893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:22:38.393087       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:22:38.604516       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:22:38.626124       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	I0913 18:22:38.627911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a06aae77-a7ca-4bb0-8803-2138b0a92163", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e became leader
	I0913 18:22:38.727799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-979357 -n addons-979357
helpers_test.go:261: (dbg) Run:  kubectl --context addons-979357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-t2k2m ingress-nginx-admission-patch-jsft5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-979357 describe pod busybox ingress-nginx-admission-create-t2k2m ingress-nginx-admission-patch-jsft5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-979357 describe pod busybox ingress-nginx-admission-create-t2k2m ingress-nginx-admission-patch-jsft5: exit status 1 (69.27735ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-979357/192.168.39.34
	Start Time:       Fri, 13 Sep 2024 18:24:06 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9h22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h9h22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-979357
	  Normal   Pulling    7m55s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m55s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m55s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2k2m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jsft5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-979357 describe pod busybox ingress-nginx-admission-create-t2k2m ingress-nginx-admission-patch-jsft5: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.21s)

                                                
                                    
TestAddons/parallel/Ingress (153.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-979357 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-979357 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-979357 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [806d4c49-56fb-4b01-a2cd-83bdf674d6eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [806d4c49-56fb-4b01-a2cd-83bdf674d6eb] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00435854s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-979357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.072701836s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-979357 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.34
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable ingress-dns --alsologtostderr -v=1: (1.486381884s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable ingress --alsologtostderr -v=1: (7.705053269s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-979357 -n addons-979357
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 logs -n 25: (1.266124025s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-220014                                                                     | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | binary-mirror-840809                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46177                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-840809                                                                     | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-979357 --wait=true                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-979357 ssh cat                                                                       | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | /opt/local-path-provisioner/pvc-2e98d28b-4232-4373-82bf-032b9972820e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-979357 ip                                                                            | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-979357 addons                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-979357 ssh curl -s                                                                   | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-979357 ip                                                                            | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:21:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:21:44.933336   11846 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:21:44.933589   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933598   11846 out.go:358] Setting ErrFile to fd 2...
	I0913 18:21:44.933603   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933811   11846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:21:44.934483   11846 out.go:352] Setting JSON to false
	I0913 18:21:44.935314   11846 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":248,"bootTime":1726251457,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:21:44.935405   11846 start.go:139] virtualization: kvm guest
	I0913 18:21:44.937733   11846 out.go:177] * [addons-979357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:21:44.939244   11846 notify.go:220] Checking for updates...
	I0913 18:21:44.939253   11846 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:21:44.940802   11846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:21:44.942374   11846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:21:44.943849   11846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:44.945315   11846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:21:44.946781   11846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:21:44.948355   11846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:21:44.980298   11846 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:21:44.981482   11846 start.go:297] selected driver: kvm2
	I0913 18:21:44.981496   11846 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:21:44.981507   11846 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:21:44.982221   11846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.982292   11846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:21:44.996730   11846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:21:44.996769   11846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:21:44.997020   11846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:21:44.997050   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:21:44.997088   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:21:44.997097   11846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:21:44.997143   11846 start.go:340] cluster config:
	{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:44.997247   11846 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.998916   11846 out.go:177] * Starting "addons-979357" primary control-plane node in "addons-979357" cluster
	I0913 18:21:45.000116   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:21:45.000156   11846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:21:45.000181   11846 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:45.000289   11846 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:21:45.000299   11846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:21:45.000586   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:21:45.000604   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json: {Name:mk395248c1d6a5d1f66c229ec194a50ba2a56d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:45.000738   11846 start.go:360] acquireMachinesLock for addons-979357: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:21:45.000781   11846 start.go:364] duration metric: took 30.582µs to acquireMachinesLock for "addons-979357"
	I0913 18:21:45.000797   11846 start.go:93] Provisioning new machine with config: &{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:21:45.000848   11846 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:21:45.002398   11846 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 18:21:45.002531   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:21:45.002566   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:45.016840   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0913 18:21:45.017377   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:45.017901   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:21:45.017922   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:45.018288   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:45.018450   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:21:45.018570   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:21:45.018700   11846 start.go:159] libmachine.API.Create for "addons-979357" (driver="kvm2")
	I0913 18:21:45.018725   11846 client.go:168] LocalClient.Create starting
	I0913 18:21:45.018761   11846 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:21:45.156400   11846 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:21:45.353847   11846 main.go:141] libmachine: Running pre-create checks...
	I0913 18:21:45.353873   11846 main.go:141] libmachine: (addons-979357) Calling .PreCreateCheck
	I0913 18:21:45.354405   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:21:45.354848   11846 main.go:141] libmachine: Creating machine...
	I0913 18:21:45.354863   11846 main.go:141] libmachine: (addons-979357) Calling .Create
	I0913 18:21:45.354984   11846 main.go:141] libmachine: (addons-979357) Creating KVM machine...
	I0913 18:21:45.356174   11846 main.go:141] libmachine: (addons-979357) DBG | found existing default KVM network
	I0913 18:21:45.356944   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.356784   11867 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014fa0}
	I0913 18:21:45.356967   11846 main.go:141] libmachine: (addons-979357) DBG | created network xml: 
	I0913 18:21:45.356978   11846 main.go:141] libmachine: (addons-979357) DBG | <network>
	I0913 18:21:45.356983   11846 main.go:141] libmachine: (addons-979357) DBG |   <name>mk-addons-979357</name>
	I0913 18:21:45.356989   11846 main.go:141] libmachine: (addons-979357) DBG |   <dns enable='no'/>
	I0913 18:21:45.356997   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357004   11846 main.go:141] libmachine: (addons-979357) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:21:45.357012   11846 main.go:141] libmachine: (addons-979357) DBG |     <dhcp>
	I0913 18:21:45.357018   11846 main.go:141] libmachine: (addons-979357) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:21:45.357022   11846 main.go:141] libmachine: (addons-979357) DBG |     </dhcp>
	I0913 18:21:45.357027   11846 main.go:141] libmachine: (addons-979357) DBG |   </ip>
	I0913 18:21:45.357033   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357037   11846 main.go:141] libmachine: (addons-979357) DBG | </network>
	I0913 18:21:45.357041   11846 main.go:141] libmachine: (addons-979357) DBG | 
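The XML dump above is the private network definition minikube hands to libvirt for this profile. When a failure in this report needs to be reproduced on the CI host, the stored definition can be read back; a minimal sketch, assuming virsh is installed and the system connection (qemu:///system, as in the config above) is used:

    # List the libvirt networks known to the system connection (assumes virsh is available on the host).
    virsh --connect qemu:///system net-list --all
    # Print the XML that was actually defined for this profile's network.
    virsh --connect qemu:///system net-dumpxml mk-addons-979357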
	I0913 18:21:45.362778   11846 main.go:141] libmachine: (addons-979357) DBG | trying to create private KVM network mk-addons-979357 192.168.39.0/24...
	I0913 18:21:45.429739   11846 main.go:141] libmachine: (addons-979357) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.429776   11846 main.go:141] libmachine: (addons-979357) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:21:45.429787   11846 main.go:141] libmachine: (addons-979357) DBG | private KVM network mk-addons-979357 192.168.39.0/24 created
	I0913 18:21:45.429871   11846 main.go:141] libmachine: (addons-979357) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:21:45.429918   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.429655   11867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.695461   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.695348   11867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa...
	I0913 18:21:45.815456   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815333   11867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk...
	I0913 18:21:45.815481   11846 main.go:141] libmachine: (addons-979357) DBG | Writing magic tar header
	I0913 18:21:45.815490   11846 main.go:141] libmachine: (addons-979357) DBG | Writing SSH key tar header
	I0913 18:21:45.815498   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815436   11867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.815566   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357
	I0913 18:21:45.815594   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:21:45.815609   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 (perms=drwx------)
	I0913 18:21:45.815616   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.815624   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:21:45.815629   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:21:45.815635   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:21:45.815641   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home
	I0913 18:21:45.815651   11846 main.go:141] libmachine: (addons-979357) DBG | Skipping /home - not owner
	I0913 18:21:45.815665   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:21:45.815681   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:21:45.815693   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:21:45.815703   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:21:45.815711   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:21:45.815741   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:45.816699   11846 main.go:141] libmachine: (addons-979357) define libvirt domain using xml: 
	I0913 18:21:45.816712   11846 main.go:141] libmachine: (addons-979357) <domain type='kvm'>
	I0913 18:21:45.816718   11846 main.go:141] libmachine: (addons-979357)   <name>addons-979357</name>
	I0913 18:21:45.816723   11846 main.go:141] libmachine: (addons-979357)   <memory unit='MiB'>4000</memory>
	I0913 18:21:45.816728   11846 main.go:141] libmachine: (addons-979357)   <vcpu>2</vcpu>
	I0913 18:21:45.816732   11846 main.go:141] libmachine: (addons-979357)   <features>
	I0913 18:21:45.816738   11846 main.go:141] libmachine: (addons-979357)     <acpi/>
	I0913 18:21:45.816744   11846 main.go:141] libmachine: (addons-979357)     <apic/>
	I0913 18:21:45.816750   11846 main.go:141] libmachine: (addons-979357)     <pae/>
	I0913 18:21:45.816759   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.816766   11846 main.go:141] libmachine: (addons-979357)   </features>
	I0913 18:21:45.816776   11846 main.go:141] libmachine: (addons-979357)   <cpu mode='host-passthrough'>
	I0913 18:21:45.816783   11846 main.go:141] libmachine: (addons-979357)   
	I0913 18:21:45.816798   11846 main.go:141] libmachine: (addons-979357)   </cpu>
	I0913 18:21:45.816806   11846 main.go:141] libmachine: (addons-979357)   <os>
	I0913 18:21:45.816810   11846 main.go:141] libmachine: (addons-979357)     <type>hvm</type>
	I0913 18:21:45.816816   11846 main.go:141] libmachine: (addons-979357)     <boot dev='cdrom'/>
	I0913 18:21:45.816820   11846 main.go:141] libmachine: (addons-979357)     <boot dev='hd'/>
	I0913 18:21:45.816825   11846 main.go:141] libmachine: (addons-979357)     <bootmenu enable='no'/>
	I0913 18:21:45.816831   11846 main.go:141] libmachine: (addons-979357)   </os>
	I0913 18:21:45.816836   11846 main.go:141] libmachine: (addons-979357)   <devices>
	I0913 18:21:45.816843   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='cdrom'>
	I0913 18:21:45.816853   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/boot2docker.iso'/>
	I0913 18:21:45.816864   11846 main.go:141] libmachine: (addons-979357)       <target dev='hdc' bus='scsi'/>
	I0913 18:21:45.816874   11846 main.go:141] libmachine: (addons-979357)       <readonly/>
	I0913 18:21:45.816884   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816910   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='disk'>
	I0913 18:21:45.816927   11846 main.go:141] libmachine: (addons-979357)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:21:45.816935   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk'/>
	I0913 18:21:45.816942   11846 main.go:141] libmachine: (addons-979357)       <target dev='hda' bus='virtio'/>
	I0913 18:21:45.816949   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816955   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.816961   11846 main.go:141] libmachine: (addons-979357)       <source network='mk-addons-979357'/>
	I0913 18:21:45.816971   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.816986   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.816998   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.817019   11846 main.go:141] libmachine: (addons-979357)       <source network='default'/>
	I0913 18:21:45.817038   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.817050   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.817060   11846 main.go:141] libmachine: (addons-979357)     <serial type='pty'>
	I0913 18:21:45.817071   11846 main.go:141] libmachine: (addons-979357)       <target port='0'/>
	I0913 18:21:45.817077   11846 main.go:141] libmachine: (addons-979357)     </serial>
	I0913 18:21:45.817082   11846 main.go:141] libmachine: (addons-979357)     <console type='pty'>
	I0913 18:21:45.817089   11846 main.go:141] libmachine: (addons-979357)       <target type='serial' port='0'/>
	I0913 18:21:45.817096   11846 main.go:141] libmachine: (addons-979357)     </console>
	I0913 18:21:45.817105   11846 main.go:141] libmachine: (addons-979357)     <rng model='virtio'>
	I0913 18:21:45.817123   11846 main.go:141] libmachine: (addons-979357)       <backend model='random'>/dev/random</backend>
	I0913 18:21:45.817134   11846 main.go:141] libmachine: (addons-979357)     </rng>
	I0913 18:21:45.817145   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817152   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817157   11846 main.go:141] libmachine: (addons-979357)   </devices>
	I0913 18:21:45.817163   11846 main.go:141] libmachine: (addons-979357) </domain>
	I0913 18:21:45.817170   11846 main.go:141] libmachine: (addons-979357) 
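Likewise, the domain XML defined above and the DHCP lease it later obtains can be inspected directly through libvirt; a hedged sketch under the same assumption that virsh is present on the host:

    # Show the stored definition of the minikube VM.
    virsh --connect qemu:///system dumpxml addons-979357
    # List the interface addresses the guest obtained (should match the lease logged below).
    virsh --connect qemu:///system domifaddr addons-979357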
	I0913 18:21:45.823068   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:c9:b7:e5 in network default
	I0913 18:21:45.823613   11846 main.go:141] libmachine: (addons-979357) Ensuring networks are active...
	I0913 18:21:45.823634   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:45.824217   11846 main.go:141] libmachine: (addons-979357) Ensuring network default is active
	I0913 18:21:45.824556   11846 main.go:141] libmachine: (addons-979357) Ensuring network mk-addons-979357 is active
	I0913 18:21:45.825087   11846 main.go:141] libmachine: (addons-979357) Getting domain xml...
	I0913 18:21:45.825697   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:47.215259   11846 main.go:141] libmachine: (addons-979357) Waiting to get IP...
	I0913 18:21:47.216244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.216720   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.216737   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.216708   11867 retry.go:31] will retry after 288.192907ms: waiting for machine to come up
	I0913 18:21:47.506172   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.506706   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.506739   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.506644   11867 retry.go:31] will retry after 265.001251ms: waiting for machine to come up
	I0913 18:21:47.773271   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.773783   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.773811   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.773744   11867 retry.go:31] will retry after 301.987216ms: waiting for machine to come up
	I0913 18:21:48.077134   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.077602   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.077633   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.077565   11867 retry.go:31] will retry after 551.807466ms: waiting for machine to come up
	I0913 18:21:48.631439   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.631926   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.631948   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.631877   11867 retry.go:31] will retry after 628.057496ms: waiting for machine to come up
	I0913 18:21:49.261251   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:49.261632   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:49.261655   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:49.261592   11867 retry.go:31] will retry after 766.331433ms: waiting for machine to come up
	I0913 18:21:50.030151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.030680   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.030703   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.030633   11867 retry.go:31] will retry after 869.088297ms: waiting for machine to come up
	I0913 18:21:50.901609   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.902025   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.902046   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.901973   11867 retry.go:31] will retry after 1.351047403s: waiting for machine to come up
	I0913 18:21:52.255406   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:52.255833   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:52.255854   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:52.255806   11867 retry.go:31] will retry after 1.528727429s: waiting for machine to come up
	I0913 18:21:53.785667   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:53.786063   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:53.786084   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:53.786023   11867 retry.go:31] will retry after 1.928511226s: waiting for machine to come up
	I0913 18:21:55.715767   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:55.716158   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:55.716180   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:55.716108   11867 retry.go:31] will retry after 1.901214708s: waiting for machine to come up
	I0913 18:21:57.619291   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:57.619861   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:57.619887   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:57.619823   11867 retry.go:31] will retry after 2.844347432s: waiting for machine to come up
	I0913 18:22:00.465541   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:00.465982   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:00.466008   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:00.465919   11867 retry.go:31] will retry after 3.134520129s: waiting for machine to come up
	I0913 18:22:03.603405   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:03.603856   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:03.603883   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:03.603813   11867 retry.go:31] will retry after 4.895864383s: waiting for machine to come up
	I0913 18:22:08.503574   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.503985   11846 main.go:141] libmachine: (addons-979357) Found IP for machine: 192.168.39.34
	I0913 18:22:08.504003   11846 main.go:141] libmachine: (addons-979357) Reserving static IP address...
	I0913 18:22:08.504016   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has current primary IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.504317   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find host DHCP lease matching {name: "addons-979357", mac: "52:54:00:9b:f4:d7", ip: "192.168.39.34"} in network mk-addons-979357
	I0913 18:22:08.572524   11846 main.go:141] libmachine: (addons-979357) DBG | Getting to WaitForSSH function...
	I0913 18:22:08.572569   11846 main.go:141] libmachine: (addons-979357) Reserved static IP address: 192.168.39.34
	I0913 18:22:08.572583   11846 main.go:141] libmachine: (addons-979357) Waiting for SSH to be available...
	I0913 18:22:08.574749   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575144   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.575171   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575290   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH client type: external
	I0913 18:22:08.575309   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa (-rw-------)
	I0913 18:22:08.575337   11846 main.go:141] libmachine: (addons-979357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:22:08.575351   11846 main.go:141] libmachine: (addons-979357) DBG | About to run SSH command:
	I0913 18:22:08.575368   11846 main.go:141] libmachine: (addons-979357) DBG | exit 0
	I0913 18:22:08.710507   11846 main.go:141] libmachine: (addons-979357) DBG | SSH cmd err, output: <nil>: 
	I0913 18:22:08.710759   11846 main.go:141] libmachine: (addons-979357) KVM machine creation complete!
	I0913 18:22:08.711098   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:08.711607   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711785   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711900   11846 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:22:08.711921   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:08.713103   11846 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:22:08.713119   11846 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:22:08.713127   11846 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:22:08.713138   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.715205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715543   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.715570   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715735   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.715880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716011   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716121   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.716248   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.716428   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.716440   11846 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:22:08.829395   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:08.829432   11846 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:22:08.829439   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.832429   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.832877   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.832903   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.833092   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.833258   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833366   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833483   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.833650   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.833827   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.833837   11846 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:22:08.946841   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:22:08.946908   11846 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:22:08.946918   11846 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:22:08.946930   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947154   11846 buildroot.go:166] provisioning hostname "addons-979357"
	I0913 18:22:08.947176   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947341   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.949827   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950138   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.950163   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950307   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.950471   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950625   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950753   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.950889   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.951047   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.951059   11846 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-979357 && echo "addons-979357" | sudo tee /etc/hostname
	I0913 18:22:09.084010   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-979357
	
	I0913 18:22:09.084038   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.086820   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087218   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.087244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.087598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087771   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087892   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.088066   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.088267   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.088291   11846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-979357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-979357/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-979357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:22:09.211719   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:09.211749   11846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:22:09.211801   11846 buildroot.go:174] setting up certificates
	I0913 18:22:09.211812   11846 provision.go:84] configureAuth start
	I0913 18:22:09.211824   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:09.212141   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:09.214775   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215180   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.215205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215376   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.217631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218082   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.218145   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218259   11846 provision.go:143] copyHostCerts
	I0913 18:22:09.218330   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:22:09.218462   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:22:09.218590   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:22:09.218660   11846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.addons-979357 san=[127.0.0.1 192.168.39.34 addons-979357 localhost minikube]
	I0913 18:22:09.715311   11846 provision.go:177] copyRemoteCerts
	I0913 18:22:09.715364   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:22:09.715390   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.718319   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718625   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.718650   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718796   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.718953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.719126   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.719278   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:09.804099   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:22:09.829074   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:22:09.853991   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:22:09.877867   11846 provision.go:87] duration metric: took 666.039773ms to configureAuth
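configureAuth above generates a server certificate with the SAN list logged at 18:22:09.218660 and copies it to /etc/docker inside the guest. If the TLS material ever needs checking after a failure, the SANs can be read back with openssl; a hedged example (paths as logged, openssl assumed to be available in the guest):

    # Inside the guest, print the SANs of the server certificate that was just copied over.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'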
	I0913 18:22:09.877899   11846 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:22:09.878243   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:09.878342   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.881237   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881647   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.881678   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881809   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.882030   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882238   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882372   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.882533   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.882691   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.882704   11846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:22:10.126542   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:22:10.126574   11846 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:22:10.126585   11846 main.go:141] libmachine: (addons-979357) Calling .GetURL
	I0913 18:22:10.128029   11846 main.go:141] libmachine: (addons-979357) DBG | Using libvirt version 6000000
	I0913 18:22:10.130547   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.130974   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.131001   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.131167   11846 main.go:141] libmachine: Docker is up and running!
	I0913 18:22:10.131183   11846 main.go:141] libmachine: Reticulating splines...
	I0913 18:22:10.131190   11846 client.go:171] duration metric: took 25.112456647s to LocalClient.Create
	I0913 18:22:10.131217   11846 start.go:167] duration metric: took 25.112517605s to libmachine.API.Create "addons-979357"
	I0913 18:22:10.131230   11846 start.go:293] postStartSetup for "addons-979357" (driver="kvm2")
	I0913 18:22:10.131254   11846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:22:10.131272   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.131521   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:22:10.131545   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.133979   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134328   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.134354   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134501   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.134686   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.134836   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.134952   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.220806   11846 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:22:10.225490   11846 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:22:10.225520   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:22:10.225600   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:22:10.225631   11846 start.go:296] duration metric: took 94.394779ms for postStartSetup
	I0913 18:22:10.225667   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:10.226323   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.229002   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229334   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.229365   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229560   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:22:10.229851   11846 start.go:128] duration metric: took 25.228992984s to createHost
	I0913 18:22:10.229878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.232158   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232608   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.232631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232764   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.232960   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233116   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233281   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.233428   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:10.233612   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:10.233625   11846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:22:10.347102   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726251730.321977350
	
	I0913 18:22:10.347128   11846 fix.go:216] guest clock: 1726251730.321977350
	I0913 18:22:10.347138   11846 fix.go:229] Guest: 2024-09-13 18:22:10.32197735 +0000 UTC Remote: 2024-09-13 18:22:10.22986562 +0000 UTC m=+25.329833233 (delta=92.11173ms)
	I0913 18:22:10.347167   11846 fix.go:200] guest clock delta is within tolerance: 92.11173ms
	I0913 18:22:10.347175   11846 start.go:83] releasing machines lock for "addons-979357", held for 25.34638377s
	I0913 18:22:10.347205   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.347489   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.350285   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350656   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.350686   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350858   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351398   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351583   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351693   11846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:22:10.351742   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.351791   11846 ssh_runner.go:195] Run: cat /version.json
	I0913 18:22:10.351812   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.354604   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354894   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354935   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.354957   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355076   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355290   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355388   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.355421   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355470   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.355584   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355636   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.355715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.356046   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.476853   11846 ssh_runner.go:195] Run: systemctl --version
	I0913 18:22:10.482887   11846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:22:10.641449   11846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:22:10.648344   11846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:22:10.648410   11846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:22:10.664019   11846 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:22:10.664043   11846 start.go:495] detecting cgroup driver to use...
	I0913 18:22:10.664124   11846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:22:10.679953   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:22:10.694986   11846 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:22:10.695040   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:22:10.709192   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:22:10.723529   11846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:22:10.836708   11846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:22:10.978881   11846 docker.go:233] disabling docker service ...
	I0913 18:22:10.978945   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:22:10.993279   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:22:11.006735   11846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:22:11.135365   11846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:22:11.245556   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:22:11.259561   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:22:11.277758   11846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:22:11.277818   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.288773   11846 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:22:11.288829   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.299334   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.309742   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.320384   11846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:22:11.331897   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.343220   11846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.361330   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
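
Taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf; a sketch of the resulting settings (section names assumed from a stock CRI-O layout, not copied from this VM), plus a check against the merged config that CRI-O itself reports:

    #   [crio.image]
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0" ]
    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
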
	I0913 18:22:11.372453   11846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:22:11.382315   11846 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:22:11.382392   11846 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:22:11.396538   11846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:22:11.407320   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:11.515601   11846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:22:11.605418   11846 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:22:11.605515   11846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:22:11.610413   11846 start.go:563] Will wait 60s for crictl version
	I0913 18:22:11.610486   11846 ssh_runner.go:195] Run: which crictl
	I0913 18:22:11.614216   11846 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:22:11.653794   11846 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:22:11.653938   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.683751   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.713055   11846 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:22:11.714287   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:11.716720   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717006   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:11.717030   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717315   11846 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:22:11.721668   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:11.734152   11846 kubeadm.go:883] updating cluster {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:22:11.734262   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:22:11.734314   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:11.771955   11846 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 18:22:11.772020   11846 ssh_runner.go:195] Run: which lz4
	I0913 18:22:11.776099   11846 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:22:11.780348   11846 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:22:11.780377   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 18:22:13.063182   11846 crio.go:462] duration metric: took 1.287105483s to copy over tarball
	I0913 18:22:13.063246   11846 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:22:15.131948   11846 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068675166s)
	I0913 18:22:15.131980   11846 crio.go:469] duration metric: took 2.068772112s to extract the tarball
	I0913 18:22:15.131990   11846 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 18:22:15.168309   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:15.210774   11846 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:22:15.210798   11846 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:22:15.210807   11846 kubeadm.go:934] updating node { 192.168.39.34 8443 v1.31.1 crio true true} ...
	I0913 18:22:15.210915   11846 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-979357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:22:15.210993   11846 ssh_runner.go:195] Run: crio config
	I0913 18:22:15.258261   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:15.258285   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:15.258295   11846 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:22:15.258316   11846 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-979357 NodeName:addons-979357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:22:15.258477   11846 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-979357"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 18:22:15.258548   11846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:22:15.268665   11846 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:22:15.268737   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:22:15.278177   11846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 18:22:15.294597   11846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:22:15.310451   11846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
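
The kubeadm config rendered above (just copied to /var/tmp/minikube/kubeadm.yaml.new) still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts with a deprecation warning, as seen further down in the init output. If desired, it can be migrated ahead of time with the stock subcommand kubeadm itself suggests (paths taken from this log; the versioned kubeadm binary location is assumed to follow the same /var/lib/minikube/binaries layout as kubelet and kubectl):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
         --old-config /var/tmp/minikube/kubeadm.yaml.new \
         --new-config /tmp/kubeadm-migrated.yaml
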
	I0913 18:22:15.326796   11846 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I0913 18:22:15.330636   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:15.343203   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:15.467199   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:15.486141   11846 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357 for IP: 192.168.39.34
	I0913 18:22:15.486166   11846 certs.go:194] generating shared ca certs ...
	I0913 18:22:15.486182   11846 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.486323   11846 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:22:15.662812   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt ...
	I0913 18:22:15.662838   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt: {Name:mk0c4ac93cc268df9a8da3c08edba4e990a1051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.662994   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key ...
	I0913 18:22:15.663004   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key: {Name:mk7c3df6b789a282ec74042612aa69d3d847194d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.663072   11846 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:22:15.760468   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt ...
	I0913 18:22:15.760493   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt: {Name:mk5938022ba0b964dbd2e8d6a95f61ea52a69c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760629   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key ...
	I0913 18:22:15.760638   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key: {Name:mk4740460ce42bde935de79b4943921492fd98a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760700   11846 certs.go:256] generating profile certs ...
	I0913 18:22:15.760762   11846 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key
	I0913 18:22:15.760784   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt with IP's: []
	I0913 18:22:15.869917   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt ...
	I0913 18:22:15.869945   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: {Name:mk629832723b056c40a68a16d59abb9016c4d337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870132   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key ...
	I0913 18:22:15.870143   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key: {Name:mk7fb983c54e63b71552ed34c37898232dd25c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870218   11846 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7
	I0913 18:22:15.870238   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.34]
	I0913 18:22:15.977365   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 ...
	I0913 18:22:15.977392   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7: {Name:mk64caa72268b14b4cff0a9627f89777df35b01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977557   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 ...
	I0913 18:22:15.977570   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7: {Name:mk8693bd1404fecfaa4562dd7e045a763b78878a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977637   11846 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt
	I0913 18:22:15.977706   11846 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key
	I0913 18:22:15.977750   11846 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key
	I0913 18:22:15.977766   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt with IP's: []
	I0913 18:22:16.102506   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt ...
	I0913 18:22:16.102535   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt: {Name:mk4e2dff54c8b7cdd4d081d100bae0960534d953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:16.102678   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key ...
	I0913 18:22:16.102688   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key: {Name:mkeaff14ff97f40f98f8eae4b259ad1243c5a15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:16.102848   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:22:16.102882   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:22:16.102905   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:22:16.102929   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:22:16.103974   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:22:16.128760   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:22:16.154237   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:22:16.180108   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:22:16.216371   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:22:16.241414   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:22:16.265812   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:22:16.288640   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 18:22:16.311923   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:22:16.335383   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:22:16.351852   11846 ssh_runner.go:195] Run: openssl version
	I0913 18:22:16.357393   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:22:16.368587   11846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373059   11846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373123   11846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.378918   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
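
The two steps above install minikubeCA.pem into the system trust store under its OpenSSL subject hash, which is where the b5213941.0 link name comes from:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
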
	I0913 18:22:16.390126   11846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:22:16.394003   11846 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:22:16.394057   11846 kubeadm.go:392] StartCluster: {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:22:16.394167   11846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:22:16.394219   11846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:22:16.431957   11846 cri.go:89] found id: ""
	I0913 18:22:16.432037   11846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:22:16.442325   11846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:22:16.452438   11846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:22:16.462279   11846 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:22:16.462298   11846 kubeadm.go:157] found existing configuration files:
	
	I0913 18:22:16.462336   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:22:16.471621   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:22:16.471678   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:22:16.481226   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:22:16.491050   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:22:16.491106   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:22:16.501169   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.510516   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:22:16.510568   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.519925   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:22:16.529268   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:22:16.529320   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:22:16.539219   11846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:22:16.593329   11846 kubeadm.go:310] W0913 18:22:16.575543     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.594569   11846 kubeadm.go:310] W0913 18:22:16.576957     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.708878   11846 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:22:26.701114   11846 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:22:26.701216   11846 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:22:26.701325   11846 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:22:26.701444   11846 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:22:26.701566   11846 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:22:26.701658   11846 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:22:26.703010   11846 out.go:235]   - Generating certificates and keys ...
	I0913 18:22:26.703101   11846 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:22:26.703171   11846 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:22:26.703246   11846 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:22:26.703315   11846 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:22:26.703395   11846 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:22:26.703486   11846 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:22:26.703560   11846 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:22:26.703710   11846 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.703780   11846 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:22:26.703947   11846 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.704047   11846 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:22:26.704149   11846 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:22:26.704214   11846 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:22:26.704286   11846 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:22:26.704372   11846 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:22:26.704458   11846 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:22:26.704532   11846 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:22:26.704633   11846 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:22:26.704715   11846 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:22:26.704825   11846 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:22:26.704915   11846 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:22:26.706252   11846 out.go:235]   - Booting up control plane ...
	I0913 18:22:26.706339   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:22:26.706406   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:22:26.706497   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:22:26.706623   11846 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:22:26.706724   11846 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:22:26.706784   11846 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:22:26.706939   11846 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:22:26.707027   11846 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:22:26.707076   11846 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.200467ms
	I0913 18:22:26.707151   11846 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:22:26.707212   11846 kubeadm.go:310] [api-check] The API server is healthy after 5.501177192s
	I0913 18:22:26.707308   11846 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:22:26.707422   11846 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:22:26.707475   11846 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:22:26.707633   11846 kubeadm.go:310] [mark-control-plane] Marking the node addons-979357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:22:26.707707   11846 kubeadm.go:310] [bootstrap-token] Using token: d54731.5jrr63v1n2n2kz6m
	I0913 18:22:26.708858   11846 out.go:235]   - Configuring RBAC rules ...
	I0913 18:22:26.708942   11846 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:22:26.709016   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:22:26.709169   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:22:26.709274   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:22:26.709367   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:22:26.709442   11846 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:22:26.709548   11846 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:22:26.709594   11846 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:22:26.709640   11846 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:22:26.709650   11846 kubeadm.go:310] 
	I0913 18:22:26.709698   11846 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:22:26.709704   11846 kubeadm.go:310] 
	I0913 18:22:26.709773   11846 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:22:26.709779   11846 kubeadm.go:310] 
	I0913 18:22:26.709801   11846 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:22:26.709847   11846 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:22:26.709896   11846 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:22:26.709905   11846 kubeadm.go:310] 
	I0913 18:22:26.709959   11846 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:22:26.709965   11846 kubeadm.go:310] 
	I0913 18:22:26.710000   11846 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:22:26.710006   11846 kubeadm.go:310] 
	I0913 18:22:26.710049   11846 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:22:26.710145   11846 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:22:26.710258   11846 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:22:26.710269   11846 kubeadm.go:310] 
	I0913 18:22:26.710342   11846 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:22:26.710413   11846 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:22:26.710420   11846 kubeadm.go:310] 
	I0913 18:22:26.710489   11846 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710581   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 18:22:26.710601   11846 kubeadm.go:310] 	--control-plane 
	I0913 18:22:26.710604   11846 kubeadm.go:310] 
	I0913 18:22:26.710674   11846 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:22:26.710680   11846 kubeadm.go:310] 
	I0913 18:22:26.710750   11846 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710853   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
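
The sha256 value in the join commands above is the hash of the cluster CA's public key; it can be recomputed from the CA certificate with the standard openssl pipeline from the kubeadm docs (on this VM the CA was copied to /var/lib/minikube/certs/ca.crt, per the certs steps earlier):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex
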
	I0913 18:22:26.710865   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:26.710872   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:26.712247   11846 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:22:26.713291   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:22:26.725202   11846 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
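
The 496-byte conflist written here is minikube's default bridge CNI config for the 10.244.0.0/16 pod CIDR chosen above; its exact contents are not echoed in this log, but they can be inspected on the node:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # expected: bridge + portmap plugins with host-local IPAM on 10.244.0.0/16 (assumed layout, not verified here)
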
	I0913 18:22:26.748825   11846 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:22:26.748885   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:26.748946   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-979357 minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-979357 minikube.k8s.io/primary=true
	I0913 18:22:26.785894   11846 ops.go:34] apiserver oom_adj: -16
	I0913 18:22:26.895212   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.395975   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.896320   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.395286   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.896168   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.395706   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.896217   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.395424   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.477836   11846 kubeadm.go:1113] duration metric: took 3.729011911s to wait for elevateKubeSystemPrivileges
	I0913 18:22:30.477865   11846 kubeadm.go:394] duration metric: took 14.083813405s to StartCluster
	I0913 18:22:30.477884   11846 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.477996   11846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:22:30.478387   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.478575   11846 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:22:30.478599   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:22:30.478630   11846 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:22:30.478752   11846 addons.go:69] Setting yakd=true in profile "addons-979357"
	I0913 18:22:30.478773   11846 addons.go:234] Setting addon yakd=true in "addons-979357"
	I0913 18:22:30.478770   11846 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-979357"
	I0913 18:22:30.478804   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478792   11846 addons.go:69] Setting metrics-server=true in profile "addons-979357"
	I0913 18:22:30.478823   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.478809   11846 addons.go:69] Setting cloud-spanner=true in profile "addons-979357"
	I0913 18:22:30.478835   11846 addons.go:69] Setting default-storageclass=true in profile "addons-979357"
	I0913 18:22:30.478838   11846 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-979357"
	I0913 18:22:30.478848   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-979357"
	I0913 18:22:30.478849   11846 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:30.478825   11846 addons.go:234] Setting addon metrics-server=true in "addons-979357"
	I0913 18:22:30.478861   11846 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-979357"
	I0913 18:22:30.478875   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478882   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478898   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478908   11846 addons.go:69] Setting registry=true in profile "addons-979357"
	I0913 18:22:30.478923   11846 addons.go:234] Setting addon registry=true in "addons-979357"
	I0913 18:22:30.478984   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478995   11846 addons.go:69] Setting ingress=true in profile "addons-979357"
	I0913 18:22:30.479089   11846 addons.go:234] Setting addon ingress=true in "addons-979357"
	I0913 18:22:30.479124   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479203   11846 addons.go:69] Setting ingress-dns=true in profile "addons-979357"
	I0913 18:22:30.479238   11846 addons.go:234] Setting addon ingress-dns=true in "addons-979357"
	I0913 18:22:30.479259   11846 addons.go:69] Setting gcp-auth=true in profile "addons-979357"
	I0913 18:22:30.479268   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479281   11846 mustload.go:65] Loading cluster: addons-979357
	I0913 18:22:30.479301   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479333   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479338   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479346   11846 addons.go:69] Setting inspektor-gadget=true in profile "addons-979357"
	I0913 18:22:30.479350   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479360   11846 addons.go:234] Setting addon inspektor-gadget=true in "addons-979357"
	I0913 18:22:30.479369   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479383   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479395   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479433   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479463   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479587   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.479600   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479640   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479708   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479727   11846 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-979357"
	I0913 18:22:30.479729   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479738   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479742   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-979357"
	I0913 18:22:30.479754   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479921   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.478897   11846 addons.go:234] Setting addon cloud-spanner=true in "addons-979357"
	I0913 18:22:30.480164   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480219   11846 addons.go:69] Setting volcano=true in profile "addons-979357"
	I0913 18:22:30.480245   11846 addons.go:234] Setting addon volcano=true in "addons-979357"
	I0913 18:22:30.480280   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478820   11846 addons.go:69] Setting storage-provisioner=true in profile "addons-979357"
	I0913 18:22:30.480370   11846 addons.go:234] Setting addon storage-provisioner=true in "addons-979357"
	I0913 18:22:30.480426   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480535   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480572   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480640   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480673   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480820   11846 addons.go:69] Setting volumesnapshots=true in profile "addons-979357"
	I0913 18:22:30.480840   11846 addons.go:234] Setting addon volumesnapshots=true in "addons-979357"
	I0913 18:22:30.480871   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480912   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480944   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.481326   11846 out.go:177] * Verifying Kubernetes components...
	I0913 18:22:30.479242   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481520   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479334   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481650   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.482721   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:30.500237   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0913 18:22:30.500463   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0913 18:22:30.500482   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0913 18:22:30.500639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0913 18:22:30.500830   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500893   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500990   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501068   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501371   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501388   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501510   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501533   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501550   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501853   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501869   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501892   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.501924   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502060   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502499   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.502534   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.508808   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.508875   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I0913 18:22:30.514450   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514505   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514561   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514588   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514611   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514702   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514722   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.515525   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.515558   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518495   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0913 18:22:30.518648   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0913 18:22:30.518780   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518966   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.533480   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538314   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.538358   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.538478   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538926   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0913 18:22:30.539091   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539109   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539180   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539204   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539375   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.539537   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539596   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539644   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.540197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540517   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.540641   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540690   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.541616   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.541640   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.541970   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.542152   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.544274   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.544510   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.544533   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546219   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.546227   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.546234   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.546254   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.546261   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546395   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0913 18:22:30.546903   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.547397   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.547419   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.547706   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.548255   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.548304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0913 18:22:30.560480   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0913 18:22:30.560448   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0913 18:22:30.560561   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0913 18:22:30.560630   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0913 18:22:30.560674   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.560692   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.560628   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.560639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	W0913 18:22:30.560805   11846 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 18:22:30.561065   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561200   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561277   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561349   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562326   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562336   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562417   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562436   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562408   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562457   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562500   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562522   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562564   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562575   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563271   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563375   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563548   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563558   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563593   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.563886   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563903   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564271   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.564314   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.564394   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.564411   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564907   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565005   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565037   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565075   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565330   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.565392   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0913 18:22:30.566066   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566122   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566267   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566523   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.567164   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.567203   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.570708   11846 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-979357"
	I0913 18:22:30.570757   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.571197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571229   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.571302   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.571683   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.571734   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.571887   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571926   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.572171   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.572551   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.572627   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.581211   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.581280   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0913 18:22:30.581285   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0913 18:22:30.581511   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.582226   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.582518   11846 addons.go:234] Setting addon default-storageclass=true in "addons-979357"
	I0913 18:22:30.582554   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.582746   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.582762   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.582915   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.582949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.584229   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.584265   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.584235   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0913 18:22:30.584426   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.584925   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.584947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.585303   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.585508   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.586552   11846 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:22:30.586648   11846 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:22:30.586943   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.587350   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.587363   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0913 18:22:30.587491   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.590472   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.590556   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590571   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.590931   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.591000   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591151   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:22:30.591166   11846 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:22:30.591190   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.591251   11846 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:22:30.591281   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591303   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592093   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592703   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0913 18:22:30.592773   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:22:30.593276   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.593795   11846 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:30.593980   11846 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:22:30.594465   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.594464   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:22:30.594524   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.595224   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I0913 18:22:30.595443   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.595455   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.595704   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.595774   11846 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:22:30.596005   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:22:30.596021   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.596021   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.596151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.596413   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:22:30.596485   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.596641   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.597089   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.597116   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.597626   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.597205   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.597661   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.597680   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.597823   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.597900   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.597924   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:22:30.597937   11846 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:22:30.597966   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.598032   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.598634   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.598726   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.598936   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.599673   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.599727   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600006   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.600036   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.600232   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:30.600261   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.600288   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:22:30.600338   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.600344   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600980   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.601242   11846 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:22:30.601962   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.602482   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.602787   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0913 18:22:30.602898   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.602716   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.603290   11846 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:22:30.603303   11846 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:22:30.603320   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.603501   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.603522   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.603562   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.603698   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.603843   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.603971   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.604143   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.604873   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.604890   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.605828   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605850   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605884   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.606050   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.606504   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.606528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.606942   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607111   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.607137   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.607517   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607675   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.607867   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.607917   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.608172   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608407   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.608496   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608593   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.608646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608773   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.608791   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.609011   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.609108   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.609196   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.609292   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.610290   11846 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:22:30.610387   11846 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:22:30.611752   11846 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:30.611767   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:22:30.611783   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.611860   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:22:30.611868   11846 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:22:30.611881   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.615942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616142   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0913 18:22:30.616410   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.616449   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616495   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.616724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.616880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.616942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617103   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.617382   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.617407   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617450   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.617566   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.617700   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.617907   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.617923   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.617987   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.618223   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.618283   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.618400   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.618450   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0913 18:22:30.619331   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.619872   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.619894   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.620712   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.620723   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.621112   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0913 18:22:30.621385   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.621616   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0913 18:22:30.621630   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.621681   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.621808   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.621830   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.621985   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.622213   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.622502   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.622523   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.622544   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.622785   11846 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:22:30.623076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.623434   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.624020   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0913 18:22:30.624371   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.624479   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:30.624499   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:22:30.624514   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.624774   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.624794   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.625076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.625321   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.626357   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.627106   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.628111   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:22:30.628769   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629056   11846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:22:30.629179   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.629566   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629413   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.629715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.629829   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.629985   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.631455   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:30.631475   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:22:30.631490   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.632139   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:22:30.634478   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:22:30.634531   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.634969   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.634985   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.635140   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.635299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.635443   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.635542   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.636827   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:22:30.637904   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:22:30.639028   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:22:30.640544   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:22:30.641535   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0913 18:22:30.641939   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642316   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0913 18:22:30.642465   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.642489   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.642731   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642818   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.642875   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:22:30.643103   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.643113   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.643375   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.643394   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.643415   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.643509   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.644348   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:22:30.644366   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:22:30.644386   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.645550   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.647421   11846 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:22:30.647683   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648186   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.648207   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648479   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.648648   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.648781   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.648911   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.649886   11846 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:22:30.651056   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:30.651073   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:22:30.651091   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.654528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.654955   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.654976   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.655136   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.655308   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.655455   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.655556   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.661503   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0913 18:22:30.661851   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.662364   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.662380   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.662640   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.662820   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.664099   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.664269   11846 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.664283   11846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:22:30.664299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.666963   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667366   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.667383   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667513   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.667646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.667741   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.667850   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.876396   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:30.876459   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
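The long sed pipeline above is what injects the host.minikube.internal record into the coredns ConfigMap. As a rough sketch (reconstructed only from the sed expressions shown here, not captured from the cluster, and assuming an otherwise stock Corefile), the patched server block ends up looking like:

    .:53 {
        log                                    # inserted before the existing "errors" line
        errors
        ...
        hosts {                                # inserted before the "forward . /etc/resolv.conf" line
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The "..." lines stand in for whatever other directives the stock Corefile already contains; only the log line and the hosts block are added by the command above.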
	I0913 18:22:30.928879   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.930858   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:22:30.930876   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:22:30.989689   11846 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:22:30.989714   11846 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:22:31.040586   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:31.057460   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:31.100555   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:22:31.100583   11846 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:22:31.105990   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:22:31.106016   11846 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:22:31.191777   11846 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:22:31.191803   11846 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:22:31.194629   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:22:31.194653   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:22:31.261951   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:31.268194   11846 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:22:31.268218   11846 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:22:31.269743   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:22:31.269764   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:22:31.367341   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:31.383222   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.383252   11846 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:22:31.394617   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:31.396907   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:31.431732   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:22:31.431760   11846 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:22:31.472624   11846 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:22:31.472651   11846 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:22:31.498512   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:22:31.498541   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:22:31.549749   11846 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.549772   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:22:31.556719   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:22:31.556741   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:22:31.566668   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.583646   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:22:31.583673   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:22:31.624498   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:22:31.624524   11846 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:22:31.705541   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:22:31.705566   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:22:31.738522   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:31.738549   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:22:31.744752   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.774264   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:22:31.774288   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:22:31.899545   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:31.899571   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:22:31.916895   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:22:31.916922   11846 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:22:32.112312   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:22:32.112341   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:22:32.123767   11846 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:32.123794   11846 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:22:32.215746   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:32.287431   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:32.287460   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:22:32.301669   11846 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.301701   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:22:32.394481   11846 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.394508   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:22:32.514672   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:32.514700   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:22:32.519283   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.584445   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.808431   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:32.808460   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:22:32.958075   11846 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.081583936s)
	I0913 18:22:32.958125   11846 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 18:22:32.958136   11846 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.081703044s)
	I0913 18:22:32.958221   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029312252s)
	I0913 18:22:32.958260   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.958271   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959173   11846 node_ready.go:35] waiting up to 6m0s for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.959336   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959354   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.959377   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.959389   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959904   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959941   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959953   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.962939   11846 node_ready.go:49] node "addons-979357" has status "Ready":"True"
	I0913 18:22:32.962965   11846 node_ready.go:38] duration metric: took 3.757473ms for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.962977   11846 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:32.981363   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
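The node_ready/pod_ready polling above is minikube's own readiness helper; a roughly equivalent manual check with plain kubectl (assuming the same context and the pod name taken from this log) would be:

    kubectl --context addons-979357 wait --for=condition=Ready node/addons-979357 --timeout=6m
    kubectl --context addons-979357 -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-2gkt9 --timeout=6m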
	I0913 18:22:32.982346   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.982366   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.982651   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.982696   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.982707   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:33.207362   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:33.207383   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:22:33.462364   11846 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-979357" context rescaled to 1 replicas
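The "rescaled to 1 replicas" message reflects minikube trimming the coredns Deployment down to a single replica for this single-node cluster; done by hand, the equivalent would be roughly:

    kubectl --context addons-979357 -n kube-system scale deployment coredns --replicas=1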
	I0913 18:22:33.565942   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:33.565968   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:22:33.892546   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:33.892578   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:22:34.137718   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:35.208928   11846 pod_ready.go:103] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:35.463173   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.422547754s)
	I0913 18:22:35.463218   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463226   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463481   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463503   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:35.463512   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463519   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463699   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463745   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:35.463754   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.177658   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.120163066s)
	I0913 18:22:36.177710   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177722   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177781   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.915798657s)
	I0913 18:22:36.177817   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177829   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177818   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.810444318s)
	I0913 18:22:36.177874   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177895   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177950   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.177983   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.177995   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178004   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178012   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178377   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178392   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178415   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178438   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178473   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178498   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178511   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178524   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178536   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178447   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178606   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178613   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178625   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178943   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178958   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.179947   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.179951   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.179962   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.391729   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.391752   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.392010   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.392058   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.392065   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.513516   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:36.513545   11846 pod_ready.go:82] duration metric: took 3.532154275s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:36.513561   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.702586   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:22:37.702623   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:37.705721   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706173   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:37.706204   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:37.706598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:37.706724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:37.706834   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:37.941566   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:22:38.057578   11846 addons.go:234] Setting addon gcp-auth=true in "addons-979357"
	I0913 18:22:38.057630   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:38.057962   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.057998   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.072716   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0913 18:22:38.073244   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.073727   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.073753   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.074119   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.074874   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.074920   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.089603   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0913 18:22:38.090145   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.090681   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.090703   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.091107   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.091372   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:38.093189   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:38.093398   11846 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:22:38.093425   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:38.096456   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.096850   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:38.096871   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.097020   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:38.097184   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:38.097332   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:38.097456   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:38.611050   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:38.611074   11846 pod_ready.go:82] duration metric: took 2.097504572s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:38.611087   11846 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.180671   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.783727776s)
	I0913 18:22:39.180723   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180729   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.78607227s)
	I0913 18:22:39.180743   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.614047493s)
	I0913 18:22:39.180760   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180786   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436006015s)
	I0913 18:22:39.180808   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180818   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180820   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.965045353s)
	I0913 18:22:39.180833   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180846   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180763   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180917   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180791   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180980   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.661665418s)
	I0913 18:22:39.180735   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	W0913 18:22:39.181015   11846 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:39.181035   11846 retry.go:31] will retry after 132.635799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:39.181141   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.5966432s)
	I0913 18:22:39.181168   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181177   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.181255   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.181292   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.181299   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.181306   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181313   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182158   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182169   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182177   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182194   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182874   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.182909   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182918   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182925   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182932   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183061   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183085   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183090   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183101   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183173   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183188   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183192   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183198   11846 addons.go:475] Verifying addon metrics-server=true in "addons-979357"
	I0913 18:22:39.183211   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183227   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183233   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183141   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183266   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183276   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183394   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183404   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183412   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183277   11846 addons.go:475] Verifying addon registry=true in "addons-979357"
	I0913 18:22:39.183673   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183702   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183709   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183175   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183811   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183814   11846 pod_ready.go:93] pod "etcd-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.183240   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183829   11846 pod_ready.go:82] duration metric: took 572.7356ms for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183838   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183842   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183149   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183818   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.184008   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183276   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.184353   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.184367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.184376   11846 addons.go:475] Verifying addon ingress=true in "addons-979357"
	I0913 18:22:39.185002   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.185027   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.186229   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.186332   11846 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-979357 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:22:39.187398   11846 out.go:177] * Verifying registry addon...
	I0913 18:22:39.188256   11846 out.go:177] * Verifying ingress addon...
	I0913 18:22:39.189818   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:22:39.190687   11846 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:22:39.210962   11846 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:22:39.211000   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.212603   11846 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:22:39.212623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.314470   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:39.711545   11846 pod_ready.go:93] pod "kube-apiserver-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.711574   11846 pod_ready.go:82] duration metric: took 527.723521ms for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.711588   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.720988   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.727065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.735954   11846 pod_ready.go:93] pod "kube-controller-manager-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.735985   11846 pod_ready.go:82] duration metric: took 24.3888ms for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.735999   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749808   11846 pod_ready.go:93] pod "kube-proxy-qxmw4" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.749827   11846 pod_ready.go:82] duration metric: took 13.820436ms for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749836   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761817   11846 pod_ready.go:93] pod "kube-scheduler-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.761834   11846 pod_ready.go:82] duration metric: took 11.992857ms for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761841   11846 pod_ready.go:39] duration metric: took 6.798852631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:39.761856   11846 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:39.761902   11846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.017133876s)
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.972790008s)
	I0913 18:22:40.110740   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.110759   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.110996   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111013   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111021   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.111029   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.111037   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.111346   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111360   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111369   11846 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:40.111372   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.112081   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:40.113065   11846 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:22:40.114734   11846 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:22:40.115664   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:22:40.115892   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:40.115906   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:22:40.132558   11846 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:22:40.132577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.211311   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:40.211334   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:22:40.220393   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.220516   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:40.300610   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.300638   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:22:40.389824   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.621694   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.843154   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.844023   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.120868   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.194711   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.195587   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.201412   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.886888763s)
	I0913 18:22:41.201454   11846 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.439534942s)
	I0913 18:22:41.201468   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201480   11846 api_server.go:72] duration metric: took 10.722879781s to wait for apiserver process to appear ...
	I0913 18:22:41.201485   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.201489   11846 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:41.201511   11846 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0913 18:22:41.201764   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.201822   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.201837   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.201844   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201852   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.202028   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.202047   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.206053   11846 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I0913 18:22:41.206959   11846 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:41.206977   11846 api_server.go:131] duration metric: took 5.482612ms to wait for apiserver health ...
	I0913 18:22:41.206984   11846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:41.214695   11846 system_pods.go:59] 18 kube-system pods found
	I0913 18:22:41.214727   11846 system_pods.go:61] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.214735   11846 system_pods.go:61] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.214746   11846 system_pods.go:61] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.214760   11846 system_pods.go:61] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.214772   11846 system_pods.go:61] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.214782   11846 system_pods.go:61] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.214789   11846 system_pods.go:61] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.214797   11846 system_pods.go:61] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.214807   11846 system_pods.go:61] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.214821   11846 system_pods.go:61] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.214830   11846 system_pods.go:61] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.214838   11846 system_pods.go:61] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.214850   11846 system_pods.go:61] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.214862   11846 system_pods.go:61] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.214872   11846 system_pods.go:61] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.214884   11846 system_pods.go:61] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214903   11846 system_pods.go:61] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214910   11846 system_pods.go:61] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.214917   11846 system_pods.go:74] duration metric: took 7.926337ms to wait for pod list to return data ...
	I0913 18:22:41.214926   11846 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:41.217763   11846 default_sa.go:45] found service account: "default"
	I0913 18:22:41.217781   11846 default_sa.go:55] duration metric: took 2.845911ms for default service account to be created ...
	I0913 18:22:41.217790   11846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:41.226796   11846 system_pods.go:86] 18 kube-system pods found
	I0913 18:22:41.226823   11846 system_pods.go:89] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.226831   11846 system_pods.go:89] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.226841   11846 system_pods.go:89] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.226852   11846 system_pods.go:89] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.226862   11846 system_pods.go:89] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.226869   11846 system_pods.go:89] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.226876   11846 system_pods.go:89] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.226883   11846 system_pods.go:89] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.226896   11846 system_pods.go:89] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.226903   11846 system_pods.go:89] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.226913   11846 system_pods.go:89] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.226923   11846 system_pods.go:89] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.226936   11846 system_pods.go:89] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.226945   11846 system_pods.go:89] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.226956   11846 system_pods.go:89] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.226966   11846 system_pods.go:89] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226979   11846 system_pods.go:89] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226987   11846 system_pods.go:89] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.226997   11846 system_pods.go:126] duration metric: took 9.200944ms to wait for k8s-apps to be running ...
	I0913 18:22:41.227009   11846 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:41.227055   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:41.634996   11846 system_svc.go:56] duration metric: took 407.978559ms WaitForService to wait for kubelet
	I0913 18:22:41.635015   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.245157022s)
	I0913 18:22:41.635029   11846 kubeadm.go:582] duration metric: took 11.156427988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:41.635054   11846 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:41.635053   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635073   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635381   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635400   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.635410   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635434   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.635497   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635722   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635759   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.638410   11846 addons.go:475] Verifying addon gcp-auth=true in "addons-979357"
	I0913 18:22:41.640220   11846 out.go:177] * Verifying gcp-auth addon...
	I0913 18:22:41.642958   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:22:41.721176   11846 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:22:41.721197   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:41.722056   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.765233   11846 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:22:41.765260   11846 node_conditions.go:123] node cpu capacity is 2
	I0913 18:22:41.765276   11846 node_conditions.go:105] duration metric: took 130.215708ms to run NodePressure ...
	I0913 18:22:41.765289   11846 start.go:241] waiting for startup goroutines ...
	I0913 18:22:41.787100   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.787864   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.120679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.147184   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.194390   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.195105   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.619872   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.645630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.693894   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.695153   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.120929   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.145927   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.194596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.195583   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.621917   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.645549   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.693559   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.695135   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.121292   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.146843   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.195593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.195599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.621514   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.646833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.694699   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.695284   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.121000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.146665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.221808   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:45.221886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.621175   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.646182   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.696648   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.697620   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.147336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.193470   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.195172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:46.620919   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.646586   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.693776   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.694844   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.121098   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.146164   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.194357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.194812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.620988   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.646008   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.695231   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.695519   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.123021   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.148617   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.194472   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.197071   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.620608   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.647296   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.693740   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.696156   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.121349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.193353   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.195100   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.620792   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.646311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.694786   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.695121   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.120264   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.146350   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.195145   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.195301   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.623572   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.647378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.694258   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.695502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.121299   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.147289   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.195022   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.196037   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.622665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.647969   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.694417   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.695278   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.120925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.147440   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.193805   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.195323   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.620665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.646899   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.694596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.695098   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.121172   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.147196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.193933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.195515   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.620912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.646554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.694887   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.696858   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.121127   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.146492   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.193531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.196209   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.619665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.647089   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.693272   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.695620   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.121110   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.222531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.223243   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.621744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.647722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.695503   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.695685   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.120857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.147149   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.195602   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.195853   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:56.620083   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.646767   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.695272   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.696725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.120527   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.146315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.196813   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.197244   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:57.620578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.647230   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.693611   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.695949   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.120685   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.147408   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.193377   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.195277   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.620171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.646736   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.695046   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.695240   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.121002   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.193596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:59.195514   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.621837   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.646971   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.695285   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.695341   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.120985   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.146606   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.194196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.195216   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:00.622220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.648159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.693250   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.695562   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.121311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.147065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.198443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.198571   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.620857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.647554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.695186   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.695496   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.120196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.147540   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.194122   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.196710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.623336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.646284   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.693416   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.695367   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.121367   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.146882   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.195451   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.196172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.620748   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.647039   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.694700   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.695234   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.121411   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.148078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.194865   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.195162   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.620921   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.645990   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.695569   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.695683   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.120274   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.146571   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.220150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.220498   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.621456   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.647109   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.694530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.695969   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.120728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.146744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.195253   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:06.195415   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.620898   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.647924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.694635   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.694976   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.127001   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.146392   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.193687   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.196384   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:07.621298   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.646498   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.693773   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.695419   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.127877   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.145692   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.193920   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:08.196181   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.622851   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.647712   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.694786   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.696188   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.120734   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.147876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.194575   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.195140   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:09.620159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.693725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.695051   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.121729   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.147049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.195211   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.195743   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.620510   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.646705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.694026   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.695703   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.131933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.221769   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.222414   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.222614   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.620112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.646407   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.693639   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.695523   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.120722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.147783   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.195174   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.195474   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.620765   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.646438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.693266   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.695076   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.120438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.146881   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.195465   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.195886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.621014   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.646016   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.695763   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.696160   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.121538   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.146032   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.194101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.194532   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.620817   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.646854   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.694932   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.695089   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.119855   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.146131   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.220403   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.220546   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.626509   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.648020   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.694713   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.696103   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.147101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.193946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.195256   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.625357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.721430   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.721848   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.722175   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.120426   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.145905   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.220147   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.220899   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.621209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.693623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.695270   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.120271   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.146686   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.193954   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.196010   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.621171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.646946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.694564   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.695211   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.120113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.146469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.196297   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.196447   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.650974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.651697   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.698508   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.699902   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.120815   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.146825   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.195112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.195337   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.620833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.648724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.695238   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.695503   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.120670   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.193758   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.195248   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.620443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.647189   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.693673   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.695255   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.120315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.146703   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.194041   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.195417   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.620344   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.646609   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.694000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.695298   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.119630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.146904   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.195745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:23.195868   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.620453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.645852   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.695186   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.695233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.120504   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.146668   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.193779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.194861   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:24.626216   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.646458   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.694012   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.695912   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.121136   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.147431   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.195249   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.195382   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.622578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.646123   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.693993   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.696212   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.121205   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.145925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:26.195513   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.195566   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:26.624415   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.722553   11846 kapi.go:107] duration metric: took 47.532730438s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:23:26.722593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.722614   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.120042   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.146166   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.195294   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:27.622218   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.646583   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.695195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.120287   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.146533   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.195157   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.619787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.645876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.696846   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.121064   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.146637   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.195783   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.626830   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.726354   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.727329   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.119787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.145744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.624823   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.646556   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.695578   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.120515   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.154577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.196849   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.620779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.647534   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.695303   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.120078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.146438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.620076   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.646251   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.694883   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.120737   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.146599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.194850   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.621679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.646334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.695142   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.121576   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.146542   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.195016   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.623471   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.647269   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.694854   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.121463   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.147807   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.222465   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.620588   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.646453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.694862   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.121876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.147202   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.195143   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.621045   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.647726   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.695696   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.121125   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.147217   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.194840   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.621359   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.646372   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.695547   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.121220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.146601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.195403   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.625530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.645912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.725502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.122386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.146745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.195189   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.620370   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.645995   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.694761   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.119935   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.149974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.195722   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.620233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.646888   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.120849   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.146610   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.198361   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.622772   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.646925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.695237   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.120998   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.152683   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.221014   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.621924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.646885   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.695597   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.120297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.146446   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.195887   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.621897   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.646013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.696557   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.121163   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.147972   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.195376   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.621728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.647558   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.720987   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.121126   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.157724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.258976   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.622505   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.646349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.694812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.123467   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.147968   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.194710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.620795   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.648638   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.696589   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.125323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.148794   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.226767   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.625133   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.665246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.697347   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.120702   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.146546   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.196137   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.620081   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.646626   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.697799   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.120469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.146490   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.195195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.623297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.647120   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.694857   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.121396   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:50.146235   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:50.195440   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.620309   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.036246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.036422   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.120322   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.146655   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.196307   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.621288   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.646663   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.695788   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.120768   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.147113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.194880   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.620746   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.646876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.120209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.146049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.194556   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.623965   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.646378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.697202   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.119892   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.220040   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.220900   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.620194   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.646265   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.694508   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:55.120705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.147221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:55.221270   11846 kapi.go:107] duration metric: took 1m16.030581818s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:23:55.620551   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.722715   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.123824   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.145750   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.620150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.646276   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.120601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.146762   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.620594   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.646802   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.120308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.146334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.621532   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.646676   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.126657   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.151013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.620308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.646351   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.121433   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.146323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.620455   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.647099   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.123791   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:01.148334   11846 kapi.go:107] duration metric: took 1m19.505373536s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:01.150141   11846 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-979357 cluster.
	I0913 18:24:01.151499   11846 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:01.152977   11846 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:01.620787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.121029   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.619924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.121161   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.623550   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.121221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.621386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.120200   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.620252   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:06.120523   11846 kapi.go:107] duration metric: took 1m26.004857088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:24:06.122184   11846 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, cloud-spanner, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 18:24:06.123444   11846 addons.go:510] duration metric: took 1m35.644821989s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server cloud-spanner inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 18:24:06.123477   11846 start.go:246] waiting for cluster config update ...
	I0913 18:24:06.123493   11846 start.go:255] writing updated cluster config ...
	I0913 18:24:06.123731   11846 ssh_runner.go:195] Run: rm -f paused
	I0913 18:24:06.194823   11846 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:06.196641   11846 out.go:177] * Done! kubectl is now configured to use "addons-979357" cluster and "default" namespace by default
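
The long run of kapi.go:96 lines above is minikube polling each addon's pods by label selector until they stop reporting Pending, then recording a duration metric once the selector is satisfied. As an illustration only (this is not minikube's kapi implementation), a minimal client-go sketch of that kind of wait loop could look like the following; the function name, the 500ms poll interval, and the namespace/timeout used in main are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until all of them are
// Running or the timeout expires. Illustrative sketch only; minikube's own
// wait logic (kapi.go) differs in detail.
func waitForPods(ctx context.Context, client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Load the kubeconfig that minikube writes for the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), client, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("csi-hostpath-driver pods are Running")
}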
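
The gcp-auth note above says a pod can opt out of credential mounting by carrying a `gcp-auth-skip-secret` label. A hedged sketch of doing that with client-go follows; it assumes the addon's webhook keys off that label, and the label value "true", pod name, image, and namespace are illustrative choices, not values taken from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			Labels: map[string]string{
				// Label mentioned in the gcp-auth output above; the webhook
				// is expected to skip pods that carry it (value assumed).
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod without mounted GCP credentials:", created.Name)
}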
	
	
	==> CRI-O <==
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.502337681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556502310716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a7bba79-1331-4310-88e9-77560c5bf7e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.503027053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e7fc733-da1a-481c-bc28-c6ada457fb9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.503095889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e7fc733-da1a-481c-bc28-c6ada457fb9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.503479178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e7fc733-da1a-481c-bc28-c6ada457fb9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.542283839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c9abe78-5735-4906-ae33-b5a99feb9fb4 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.542388904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c9abe78-5735-4906-ae33-b5a99feb9fb4 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.544179374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd7bde70-6a39-4535-9ff7-cb75008c4329 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.547333904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556547306984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd7bde70-6a39-4535-9ff7-cb75008c4329 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.550274359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b80a60c3-87db-40f1-83f5-9113dae36823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.550490676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b80a60c3-87db-40f1-83f5-9113dae36823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.552029966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b80a60c3-87db-40f1-83f5-9113dae36823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.587078295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74ba35bb-09de-470a-b7ab-4f2b9cde370f name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.587149838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74ba35bb-09de-470a-b7ab-4f2b9cde370f name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.588642024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27e667a5-6ce5-4c56-8b18-3a062206331b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.589883209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556589857001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27e667a5-6ce5-4c56-8b18-3a062206331b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.590510646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5b1a6a0-09f8-4c3c-9430-35e0a77ee1b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.590580295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5b1a6a0-09f8-4c3c-9430-35e0a77ee1b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.590928561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5b1a6a0-09f8-4c3c-9430-35e0a77ee1b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.629225803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1f3b781-b7d4-44e1-8cd3-a614ba03d8da name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.629314201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1f3b781-b7d4-44e1-8cd3-a614ba03d8da name=/runtime.v1.RuntimeService/Version
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.632789938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d86566ff-b9e6-47ab-938c-93db78a15855 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.634201945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556634173357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d86566ff-b9e6-47ab-938c-93db78a15855 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.634915782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54439b2b-fa73-40d2-8fe1-85b68f1afe4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.634993086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54439b2b-fa73-40d2-8fe1-85b68f1afe4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:35:56 addons-979357 crio[661]: time="2024-09-13 18:35:56.635250725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7f28ac3b62a4d20b80abf1b36f8e76449fdd981fbf7fb2f7f345d5ddde6fb9,PodSandboxId:d0ef8624420805edf03d3ea93b3e1c55f85b06e6560ca770256c60254ca174cc,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818864520595,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jsft5,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fbbf624-3525-4d06-8686-47c244faa8d2,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f305e18e914bad488669b23de6227ca1310488a16651a66e64c3104cd3ef5ef,PodSandboxId:2f0b757b23f97349b6cc8d2741ebd9dca218e4a69abe4536745560b2bf9ebf32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726251818584979983,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-t2k2m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9fb798f-6249-4cf0-bb0c-ad797779f7d2,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726251755065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54439b2b-fa73-40d2-8fe1-85b68f1afe4d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	22255494b139f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   3dfe2087710ff       hello-world-app-55bf9c44b4-hw97l
	3d3df0eba1f69       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   e04ffe767b47b       nginx
	02c6d6e4b350e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   c3ecf29668767       gcp-auth-89d5ffd79-j795q
	fc7f28ac3b62a       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             12 minutes ago      Exited              patch                     1                   d0ef862442080       ingress-nginx-admission-patch-jsft5
	6f305e18e914b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   2f0b757b23f97       ingress-nginx-admission-create-t2k2m
	7ab3cdf564912       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   68e88bddaa74c       metrics-server-84c5f94fbc-qw488
	46c152a4abcf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   c2dc3a67499c7       storage-provisioner
	e3bf9ceff710d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   abf9b475b5901       coredns-7c65d6cfc9-mtltd
	9134bc1238e6e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   44e10dfb950fd       kube-proxy-qxmw4
	1d7472d2e3f48       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   d552343eeec8a       kube-scheduler-addons-979357
	f36fa2cd406d1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   89b0eb49c6580       etcd-addons-979357
	089b47ce33805       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   b67ca3f1d294d       kube-controller-manager-addons-979357
	beb227280e8df       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   1644d60ea634e       kube-apiserver-addons-979357
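
A quick way to sanity-check the ages in the CREATED column above: the ListContainers debug responses earlier in this log carry nanosecond CreatedAt values, and the ImageFsInfo response carries the capture time. The following is a minimal Python sketch, not part of the test harness; the timestamps are copied from this log and the dictionary keys are just labels chosen here for readability.

    # Sketch only: convert the nanosecond CreatedAt values from the
    # ListContainers output into the relative ages shown in "CREATED".
    # NOW_NS is taken from the ImageFsInfo response above
    # (1726252556634173357 ns ~= 2024-09-13 18:35:56 UTC).
    from datetime import datetime, timezone

    NOW_NS = 1726252556634173357

    created_at_ns = {
        "hello-world-app": 1726252549274069505,
        "nginx":           1726252409353519154,
        "gcp-auth":        1726251840695356851,
        "kube-proxy":      1726251752259507868,
    }

    for name, ns in created_at_ns.items():
        age_s = (NOW_NS - ns) / 1e9
        started = datetime.fromtimestamp(ns / 1e9, tz=timezone.utc)
        mins, secs = divmod(int(age_s), 60)
        print(f"{name:16s} created {started:%H:%M:%S} UTC  ({mins}m{secs:02d}s ago)")

Running this reproduces the table: hello-world-app ~7 seconds old, nginx ~2 minutes, gcp-auth ~11 minutes, kube-proxy ~13 minutes.
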
	
	
	==> coredns [e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f] <==
	[INFO] 127.0.0.1:55425 - 14478 "HINFO IN 8414480608980431581.7987847580657585340. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013867574s
	[INFO] 10.244.0.8:41401 - 54033 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000413348s
	[INFO] 10.244.0.8:41401 - 10285 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151346s
	[INFO] 10.244.0.8:59177 - 13648 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180964s
	[INFO] 10.244.0.8:59177 - 58194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000217233s
	[INFO] 10.244.0.8:33613 - 8975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149676s
	[INFO] 10.244.0.8:33613 - 55809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167212s
	[INFO] 10.244.0.8:39507 - 64600 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116346s
	[INFO] 10.244.0.8:39507 - 6487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116459s
	[INFO] 10.244.0.8:44408 - 33423 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177557s
	[INFO] 10.244.0.8:44408 - 53388 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095321s
	[INFO] 10.244.0.8:50243 - 29298 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133268s
	[INFO] 10.244.0.8:50243 - 63089 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075946s
	[INFO] 10.244.0.8:44518 - 41049 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067378s
	[INFO] 10.244.0.8:44518 - 48475 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090248s
	[INFO] 10.244.0.8:58663 - 2901 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053667s
	[INFO] 10.244.0.8:58663 - 55639 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037658s
	[INFO] 10.244.0.21:34953 - 59093 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000423399s
	[INFO] 10.244.0.21:35225 - 60921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000298982s
	[INFO] 10.244.0.21:47005 - 14964 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165017s
	[INFO] 10.244.0.21:38065 - 60873 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151065s
	[INFO] 10.244.0.21:58049 - 44728 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129589s
	[INFO] 10.244.0.21:41316 - 5999 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108833s
	[INFO] 10.244.0.21:53728 - 64340 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000828725s
	[INFO] 10.244.0.21:36643 - 40190 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000688535s
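
The NXDOMAIN/NOERROR pattern above is the resolver search-path expansion at work: each lookup is first tried against the pod's search domains, then as an absolute name. The sketch below reproduces the query names seen in the log; the search lists and ndots value are the usual in-cluster defaults (an assumption here, not read from the report), and the helper name is made up for illustration.

    # Sketch only: resolv.conf-style search expansion for the two names
    # queried in the CoreDNS log above. Names with fewer than ndots (5)
    # dots are tried against each search domain before the absolute name,
    # which is why several NXDOMAIN answers precede the final NOERROR.
    def candidates(name: str, namespace: str, ndots: int = 5):
        search = [f"{namespace}.svc.cluster.local", "svc.cluster.local", "cluster.local"]
        expanded = [f"{name}.{d}" for d in search] if name.count(".") < ndots else []
        return expanded + [name]

    # The registry lookups come from a kube-system pod (10.244.0.8); the
    # storage.googleapis.com lookups come from the gcp-auth pod (10.244.0.21).
    for query, ns in (("registry.kube-system.svc.cluster.local", "kube-system"),
                      ("storage.googleapis.com", "gcp-auth")):
        for fqdn in candidates(query, ns):
            print(fqdn)
        print()
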
	
	
	==> describe nodes <==
	Name:               addons-979357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-979357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-979357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-979357
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-979357
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:35:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:34:01 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:34:01 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:34:01 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:34:01 +0000   Fri, 13 Sep 2024 18:22:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    addons-979357
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 323f75a62e114a2e93170ef9b4ca6dd9
	  System UUID:                323f75a6-2e11-4a2e-9317-0ef9b4ca6dd9
	  Boot ID:                    007169e1-5e2f-4ead-8631-d0c0eed7c494
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-hw97l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-89d5ffd79-j795q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-mtltd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-979357                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-979357             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-979357    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-qxmw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-979357             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-qw488          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-979357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-979357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-979357 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-979357 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-979357 event: Registered Node addons-979357 in Controller
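
The "Allocated resources" figures in the node description above are just the per-pod requests and limits summed and divided by the node's allocatable capacity. A small check, using only values copied from this report (kubectl truncates the percentages to whole numbers):

    # Illustration only: verify the Allocated resources figures above.
    cpu_requests_m = {            # millicores requested per pod
        "coredns": 100, "etcd": 100, "kube-apiserver": 250,
        "kube-controller-manager": 200, "kube-scheduler": 100, "metrics-server": 100,
    }
    mem_requests_mi = {"coredns": 70, "etcd": 100, "metrics-server": 200}
    mem_limits_mi = {"coredns": 170}

    alloc_cpu_m = 2 * 1000           # Allocatable cpu: 2
    alloc_mem_mi = 3912780 / 1024    # Allocatable memory: 3912780Ki

    cpu = sum(cpu_requests_m.values())
    req = sum(mem_requests_mi.values())
    lim = sum(mem_limits_mi.values())
    print(f"cpu requests    {cpu}m   {100 * cpu / alloc_cpu_m:.1f}%")   # 850m, 42.5% -> "42%"
    print(f"memory requests {req}Mi  {100 * req / alloc_mem_mi:.1f}%")  # 370Mi, 9.7% -> "9%"
    print(f"memory limits   {lim}Mi  {100 * lim / alloc_mem_mi:.1f}%")  # 170Mi, 4.4% -> "4%"
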
	
	
	==> dmesg <==
	[  +7.222070] kauditd_printk_skb: 22 callbacks suppressed
	[Sep13 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.361549] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.110464] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.984432] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.307990] kauditd_printk_skb: 45 callbacks suppressed
	[  +8.629278] kauditd_printk_skb: 63 callbacks suppressed
	[Sep13 18:24] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.527807] kauditd_printk_skb: 16 callbacks suppressed
	[ +19.654471] kauditd_printk_skb: 40 callbacks suppressed
	[Sep13 18:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:26] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.953826] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.633272] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.939706] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.945246] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.115088] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.244947] kauditd_printk_skb: 31 callbacks suppressed
	[Sep13 18:33] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.314297] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.432965] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:35] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.404432] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2] <==
	{"level":"warn","ts":"2024-09-13T18:23:51.021543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.099142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.021644Z","caller":"traceutil/trace.go:171","msg":"trace[515273731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"387.282484ms","start":"2024-09-13T18:23:50.634355Z","end":"2024-09-13T18:23:51.021638Z","steps":["trace[515273731] 'agreement among raft nodes before linearized reading'  (duration: 387.071303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.021675Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.634324Z","time spent":"387.339943ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-13T18:23:51.022402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.078944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.022467Z","caller":"traceutil/trace.go:171","msg":"trace[1756911976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"337.150275ms","start":"2024-09-13T18:23:50.685306Z","end":"2024-09-13T18:23:51.022456Z","steps":["trace[1756911976] 'agreement among raft nodes before linearized reading'  (duration: 337.020545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.022506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.685273Z","time spent":"337.222274ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-13T18:23:53.608519Z","caller":"traceutil/trace.go:171","msg":"trace[570854755] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"228.533999ms","start":"2024-09-13T18:23:53.379969Z","end":"2024-09-13T18:23:53.608503Z","steps":["trace[570854755] 'process raft request'  (duration: 228.091989ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:24:05.523053Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:24:05.164429Z","time spent":"358.62098ms","remote":"127.0.0.1:53300","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-09-13T18:24:05.526794Z","caller":"traceutil/trace.go:171","msg":"trace[1285637360] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"245.594439ms","start":"2024-09-13T18:24:05.281082Z","end":"2024-09-13T18:24:05.526676Z","steps":["trace[1285637360] 'process raft request'  (duration: 245.425195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:16.746450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.463174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-13T18:32:16.746572Z","caller":"traceutil/trace.go:171","msg":"trace[1646493262] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1944; }","duration":"259.655607ms","start":"2024-09-13T18:32:16.486899Z","end":"2024-09-13T18:32:16.746555Z","steps":["trace[1646493262] 'count revisions from in-memory index tree'  (duration: 259.404889ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:21.625942Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1491}
	{"level":"info","ts":"2024-09-13T18:32:21.662273Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1491,"took":"35.833101ms","hash":2337312588,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3420160,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-13T18:32:21.662341Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2337312588,"revision":1491,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T18:32:47.777404Z","caller":"traceutil/trace.go:171","msg":"trace[9576718] transaction","detail":"{read_only:false; response_revision:2174; number_of_response:1; }","duration":"150.443543ms","start":"2024-09-13T18:32:47.626934Z","end":"2024-09-13T18:32:47.777378Z","steps":["trace[9576718] 'process raft request'  (duration: 150.357849ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:52.478755Z","caller":"traceutil/trace.go:171","msg":"trace[505158] linearizableReadLoop","detail":"{readStateIndex:2358; appliedIndex:2357; }","duration":"421.352793ms","start":"2024-09-13T18:32:52.057386Z","end":"2024-09-13T18:32:52.478739Z","steps":["trace[505158] 'read index received'  (duration: 421.139117ms)","trace[505158] 'applied index is now lower than readState.Index'  (duration: 212.982µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:32:52.479009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.057609ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.479661Z","caller":"traceutil/trace.go:171","msg":"trace[943115826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2200; }","duration":"350.751111ms","start":"2024-09-13T18:32:52.128898Z","end":"2024-09-13T18:32:52.479649Z","steps":["trace[943115826] 'agreement among raft nodes before linearized reading'  (duration: 350.040298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.479012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.574332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.480358Z","caller":"traceutil/trace.go:171","msg":"trace[691500721] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2200; }","duration":"422.967594ms","start":"2024-09-13T18:32:52.057381Z","end":"2024-09-13T18:32:52.480349Z","steps":["trace[691500721] 'agreement among raft nodes before linearized reading'  (duration: 421.548176ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.480506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:52.057333Z","time spent":"423.124824ms","remote":"127.0.0.1:53272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-13T18:32:52.479052Z","caller":"traceutil/trace.go:171","msg":"trace[2022301504] transaction","detail":"{read_only:false; response_revision:2200; number_of_response:1; }","duration":"547.643865ms","start":"2024-09-13T18:32:51.931399Z","end":"2024-09-13T18:32:52.479043Z","steps":["trace[2022301504] 'process raft request'  (duration: 547.179229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.481455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:51.931384Z","time spent":"549.269751ms","remote":"127.0.0.1:40810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2173 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-09-13T18:32:52.479449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.09265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.481582Z","caller":"traceutil/trace.go:171","msg":"trace[2047800323] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2200; }","duration":"109.228494ms","start":"2024-09-13T18:32:52.372347Z","end":"2024-09-13T18:32:52.481576Z","steps":["trace[2047800323] 'agreement among raft nodes before linearized reading'  (duration: 107.084584ms)"],"step_count":1}
	
	
	==> gcp-auth [02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce] <==
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:24:06 Ready to marshal response ...
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:21 Ready to marshal response ...
	2024/09/13 18:32:21 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:43 Ready to marshal response ...
	2024/09/13 18:32:43 Ready to write response ...
	2024/09/13 18:32:45 Ready to marshal response ...
	2024/09/13 18:32:45 Ready to write response ...
	2024/09/13 18:33:16 Ready to marshal response ...
	2024/09/13 18:33:16 Ready to write response ...
	2024/09/13 18:33:24 Ready to marshal response ...
	2024/09/13 18:33:24 Ready to write response ...
	2024/09/13 18:35:46 Ready to marshal response ...
	2024/09/13 18:35:46 Ready to write response ...
	
	
	==> kernel <==
	 18:35:57 up 14 min,  0 users,  load average: 0.21, 0.47, 0.37
	Linux addons-979357 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a] <==
	I0913 18:32:10.039145       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.81.144"}
	I0913 18:32:15.993730       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 18:32:17.054872       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 18:32:59.435526       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0913 18:32:59.736880       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:10.989953       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:11.997980       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:13.005448       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:14.012493       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0913 18:33:24.691678       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 18:33:24.883354       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.119.71"}
	I0913 18:33:32.763152       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.763216       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.792443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.792504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.897307       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.897376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.917372       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.917776       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.942848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.943631       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 18:33:33.918112       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 18:33:33.943895       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0913 18:33:34.041990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0913 18:35:46.314495       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.159.60"}
	
	
	==> kube-controller-manager [089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2] <==
	W0913 18:34:34.306359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:34.306592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:49.947572       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:49.947797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:58.306830       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:58.306872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:35:05.121504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:05.121643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:35:32.477918       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:32.478113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:35:33.148944       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:33.149003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:35:46.162030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="65.593041ms"
	I0913 18:35:46.176909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.815557ms"
	I0913 18:35:46.177002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.467µs"
	I0913 18:35:46.177064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.659µs"
	I0913 18:35:48.641488       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 18:35:48.647238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.445µs"
	I0913 18:35:48.663191       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0913 18:35:50.011900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.261785ms"
	I0913 18:35:50.012542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.18µs"
	W0913 18:35:53.756221       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:53.756282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:35:56.632447       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:56.632481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
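
The controller-manager log above is a run of identical reflector failures for *v1.PartialObjectMetadata, which begin right after the snapshot.storage.k8s.io CRD watchers are terminated in the kube-apiserver log. When triaging output like this, a simple tally by severity and source makes the repetition obvious; the sketch below is illustrative only, the helper name is made up, and the regex matches the klog prefix format ("E0913 18:35:56.632481       1 reflector.go:158] ...") used throughout these logs.

    # Illustration only: tally klog-formatted lines by severity and source.
    import re
    from collections import Counter

    KLOG = re.compile(r"^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2})\.\d+\s+\d+\s+(\S+)\]")

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = KLOG.match(line.strip())
            if m:
                severity, _date, _time, source = m.groups()
                counts[(severity, source)] += 1
        return counts

    sample = [
        "W0913 18:35:53.756221       1 reflector.go:561] failed to list *v1.PartialObjectMetadata",
        'E0913 18:35:53.756282       1 reflector.go:158] "Unhandled Error" ...',
        "W0913 18:35:56.632447       1 reflector.go:561] failed to list *v1.PartialObjectMetadata",
    ]
    for (severity, source), n in tally(sample).most_common():
        print(f"{n:3d}  {severity}  {source}")
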
	
	
	==> kube-proxy [9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:22:33.350612       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:22:33.364476       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.34"]
	E0913 18:22:33.364537       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:22:33.483199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:22:33.483274       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:22:33.483300       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:22:33.488023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:22:33.488274       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:22:33.488283       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:22:33.494316       1 config.go:199] "Starting service config controller"
	I0913 18:22:33.494338       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:22:33.494377       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:22:33.494381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:22:33.497782       1 config.go:328] "Starting node config controller"
	I0913 18:22:33.497794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:22:33.596036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:22:33.596075       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:22:33.598825       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6] <==
	W0913 18:22:23.351491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:22:23.351533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.185862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.185917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.200594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:22:24.200752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.218466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:22:24.218561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.258477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.258532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.395515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.395621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.419001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:22:24.419792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.459549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 18:22:24.459618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.479886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:22:24.480416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.498210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:22:24.498336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.953128       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:22:24.953629       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:22:28.042327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:35:46 addons-979357 kubelet[1204]: E0913 18:35:46.398119    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252546397758588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550633,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:35:47 addons-979357 kubelet[1204]: I0913 18:35:47.351384    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlxtj\" (UniqueName: \"kubernetes.io/projected/a82db8f0-646e-4f6c-8dda-7332bed77579-kube-api-access-tlxtj\") pod \"a82db8f0-646e-4f6c-8dda-7332bed77579\" (UID: \"a82db8f0-646e-4f6c-8dda-7332bed77579\") "
	Sep 13 18:35:47 addons-979357 kubelet[1204]: I0913 18:35:47.353476    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82db8f0-646e-4f6c-8dda-7332bed77579-kube-api-access-tlxtj" (OuterVolumeSpecName: "kube-api-access-tlxtj") pod "a82db8f0-646e-4f6c-8dda-7332bed77579" (UID: "a82db8f0-646e-4f6c-8dda-7332bed77579"). InnerVolumeSpecName "kube-api-access-tlxtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:35:47 addons-979357 kubelet[1204]: I0913 18:35:47.452286    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tlxtj\" (UniqueName: \"kubernetes.io/projected/a82db8f0-646e-4f6c-8dda-7332bed77579-kube-api-access-tlxtj\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:35:47 addons-979357 kubelet[1204]: I0913 18:35:47.978489    1204 scope.go:117] "RemoveContainer" containerID="5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51"
	Sep 13 18:35:48 addons-979357 kubelet[1204]: I0913 18:35:48.027736    1204 scope.go:117] "RemoveContainer" containerID="5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51"
	Sep 13 18:35:48 addons-979357 kubelet[1204]: E0913 18:35:48.028647    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51\": container with ID starting with 5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51 not found: ID does not exist" containerID="5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51"
	Sep 13 18:35:48 addons-979357 kubelet[1204]: I0913 18:35:48.028781    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51"} err="failed to get container status \"5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51\": rpc error: code = NotFound desc = could not find container \"5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51\": container with ID starting with 5d8be76d53b6a1cc2d79f39f38496e98eba9b8fddce6215a031f4c296c8a0a51 not found: ID does not exist"
	Sep 13 18:35:49 addons-979357 kubelet[1204]: E0913 18:35:49.018482    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:35:50 addons-979357 kubelet[1204]: I0913 18:35:50.021190    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbbf624-3525-4d06-8686-47c244faa8d2" path="/var/lib/kubelet/pods/9fbbf624-3525-4d06-8686-47c244faa8d2/volumes"
	Sep 13 18:35:50 addons-979357 kubelet[1204]: I0913 18:35:50.022179    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82db8f0-646e-4f6c-8dda-7332bed77579" path="/var/lib/kubelet/pods/a82db8f0-646e-4f6c-8dda-7332bed77579/volumes"
	Sep 13 18:35:50 addons-979357 kubelet[1204]: I0913 18:35:50.023005    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fb798f-6249-4cf0-bb0c-ad797779f7d2" path="/var/lib/kubelet/pods/c9fb798f-6249-4cf0-bb0c-ad797779f7d2/volumes"
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.886149    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c80a6556-910f-4e7c-8242-f32234571525-webhook-cert\") pod \"c80a6556-910f-4e7c-8242-f32234571525\" (UID: \"c80a6556-910f-4e7c-8242-f32234571525\") "
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.886223    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rdsj\" (UniqueName: \"kubernetes.io/projected/c80a6556-910f-4e7c-8242-f32234571525-kube-api-access-5rdsj\") pod \"c80a6556-910f-4e7c-8242-f32234571525\" (UID: \"c80a6556-910f-4e7c-8242-f32234571525\") "
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.895164    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80a6556-910f-4e7c-8242-f32234571525-kube-api-access-5rdsj" (OuterVolumeSpecName: "kube-api-access-5rdsj") pod "c80a6556-910f-4e7c-8242-f32234571525" (UID: "c80a6556-910f-4e7c-8242-f32234571525"). InnerVolumeSpecName "kube-api-access-5rdsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.895759    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80a6556-910f-4e7c-8242-f32234571525-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c80a6556-910f-4e7c-8242-f32234571525" (UID: "c80a6556-910f-4e7c-8242-f32234571525"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.987199    1204 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c80a6556-910f-4e7c-8242-f32234571525-webhook-cert\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:35:51 addons-979357 kubelet[1204]: I0913 18:35:51.987233    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5rdsj\" (UniqueName: \"kubernetes.io/projected/c80a6556-910f-4e7c-8242-f32234571525-kube-api-access-5rdsj\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:35:52 addons-979357 kubelet[1204]: I0913 18:35:52.007126    1204 scope.go:117] "RemoveContainer" containerID="3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6"
	Sep 13 18:35:52 addons-979357 kubelet[1204]: I0913 18:35:52.021606    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80a6556-910f-4e7c-8242-f32234571525" path="/var/lib/kubelet/pods/c80a6556-910f-4e7c-8242-f32234571525/volumes"
	Sep 13 18:35:52 addons-979357 kubelet[1204]: I0913 18:35:52.031048    1204 scope.go:117] "RemoveContainer" containerID="3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6"
	Sep 13 18:35:52 addons-979357 kubelet[1204]: E0913 18:35:52.031661    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6\": container with ID starting with 3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6 not found: ID does not exist" containerID="3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6"
	Sep 13 18:35:52 addons-979357 kubelet[1204]: I0913 18:35:52.031759    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6"} err="failed to get container status \"3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6\": rpc error: code = NotFound desc = could not find container \"3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6\": container with ID starting with 3801ba40bdd3d2aa0b70f7848709eb6cc533bba6092182d25ab65b6802edacc6 not found: ID does not exist"
	Sep 13 18:35:56 addons-979357 kubelet[1204]: E0913 18:35:56.400095    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556399625685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:35:56 addons-979357 kubelet[1204]: E0913 18:35:56.400117    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252556399625685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31] <==
	I0913 18:22:38.267389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:22:38.392893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:22:38.393087       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:22:38.604516       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:22:38.626124       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	I0913 18:22:38.627911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a06aae77-a7ca-4bb0-8803-2138b0a92163", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e became leader
	I0913 18:22:38.727799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-979357 -n addons-979357
helpers_test.go:261: (dbg) Run:  kubectl --context addons-979357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-979357 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-979357 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-979357/192.168.39.34
	Start Time:       Fri, 13 Sep 2024 18:24:06 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9h22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h9h22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-979357
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    98s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.39s)
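The describe output above shows the default/busybox helper pod stuck in ImagePullBackOff because the pull could not authenticate ("unable to retrieve auth token: invalid username/password"). As a hedged manual follow-up (not part of the test run, assuming the addons-979357 profile is still up), the same symptom could be re-checked roughly like this:

	# List the pull-related events for the stuck pod (same data as the describe output above)
	kubectl --context addons-979357 -n default get events --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp

	# Try the pull directly on the node with crictl, to separate registry/auth problems from kubelet back-off
	minikube -p addons-979357 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc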

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (312.09s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.906386ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.035738981s
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (90.10289ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 9m44.363575823s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (64.6704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 9m48.901794002s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (70.536962ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 9m55.442748818s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (66.111306ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 10m3.861489526s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (68.224856ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 10m15.699181536s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (63.844179ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 10m24.349979346s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (62.947444ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 10m40.416060928s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (64.49487ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 11m1.442578395s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (62.88567ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 11m53.528074318s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (61.879779ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 12m56.051759405s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (64.539859ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 13m56.370538567s

                                                
                                                
** /stderr **
addons_test.go:413: (dbg) Run:  kubectl --context addons-979357 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-979357 top pods -n kube-system: exit status 1 (61.229707ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-mtltd, age: 14m47.537181578s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
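Every "kubectl top pods" attempt above returns "Metrics not available", so the check gives up after roughly five minutes of retries. A minimal sketch of manual checks that would narrow down whether the metrics API ever became available (assumed commands against the same context, not part of the test):

	# Is the metrics API registered and marked Available?
	kubectl --context addons-979357 get apiservice v1beta1.metrics.k8s.io

	# Is the metrics-server deployment ready, and what is it logging?
	kubectl --context addons-979357 -n kube-system get deploy metrics-server
	kubectl --context addons-979357 -n kube-system logs deploy/metrics-server --tail=50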
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-979357 -n addons-979357
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 logs -n 25: (1.381548108s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-220014                                                                     | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-283125                                                                     | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | binary-mirror-840809                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46177                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-840809                                                                     | binary-mirror-840809 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-979357 --wait=true                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | -p addons-979357                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | addons-979357                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-979357 ssh cat                                                                       | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:32 UTC |
	|         | /opt/local-path-provisioner/pvc-2e98d28b-4232-4373-82bf-032b9972820e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:32 UTC | 13 Sep 24 18:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-979357 ip                                                                            | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-979357 addons                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-979357 ssh curl -s                                                                   | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-979357 ip                                                                            | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-979357 addons disable                                                                | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:35 UTC | 13 Sep 24 18:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-979357 addons                                                                        | addons-979357        | jenkins | v1.34.0 | 13 Sep 24 18:37 UTC | 13 Sep 24 18:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:21:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:21:44.933336   11846 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:21:44.933589   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933598   11846 out.go:358] Setting ErrFile to fd 2...
	I0913 18:21:44.933603   11846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:44.933811   11846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:21:44.934483   11846 out.go:352] Setting JSON to false
	I0913 18:21:44.935314   11846 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":248,"bootTime":1726251457,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:21:44.935405   11846 start.go:139] virtualization: kvm guest
	I0913 18:21:44.937733   11846 out.go:177] * [addons-979357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:21:44.939244   11846 notify.go:220] Checking for updates...
	I0913 18:21:44.939253   11846 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:21:44.940802   11846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:21:44.942374   11846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:21:44.943849   11846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:44.945315   11846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:21:44.946781   11846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:21:44.948355   11846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:21:44.980298   11846 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:21:44.981482   11846 start.go:297] selected driver: kvm2
	I0913 18:21:44.981496   11846 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:21:44.981507   11846 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:21:44.982221   11846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.982292   11846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:21:44.996730   11846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:21:44.996769   11846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:21:44.997020   11846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:21:44.997050   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:21:44.997088   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:21:44.997097   11846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:21:44.997143   11846 start.go:340] cluster config:
	{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:44.997247   11846 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:44.998916   11846 out.go:177] * Starting "addons-979357" primary control-plane node in "addons-979357" cluster
	I0913 18:21:45.000116   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:21:45.000156   11846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:21:45.000181   11846 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:45.000289   11846 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:21:45.000299   11846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:21:45.000586   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:21:45.000604   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json: {Name:mk395248c1d6a5d1f66c229ec194a50ba2a56d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:45.000738   11846 start.go:360] acquireMachinesLock for addons-979357: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:21:45.000781   11846 start.go:364] duration metric: took 30.582µs to acquireMachinesLock for "addons-979357"
	I0913 18:21:45.000797   11846 start.go:93] Provisioning new machine with config: &{Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:21:45.000848   11846 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:21:45.002398   11846 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 18:21:45.002531   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:21:45.002566   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:45.016840   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0913 18:21:45.017377   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:45.017901   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:21:45.017922   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:45.018288   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:45.018450   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:21:45.018570   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:21:45.018700   11846 start.go:159] libmachine.API.Create for "addons-979357" (driver="kvm2")
	I0913 18:21:45.018725   11846 client.go:168] LocalClient.Create starting
	I0913 18:21:45.018761   11846 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:21:45.156400   11846 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:21:45.353847   11846 main.go:141] libmachine: Running pre-create checks...
	I0913 18:21:45.353873   11846 main.go:141] libmachine: (addons-979357) Calling .PreCreateCheck
	I0913 18:21:45.354405   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:21:45.354848   11846 main.go:141] libmachine: Creating machine...
	I0913 18:21:45.354863   11846 main.go:141] libmachine: (addons-979357) Calling .Create
	I0913 18:21:45.354984   11846 main.go:141] libmachine: (addons-979357) Creating KVM machine...
	I0913 18:21:45.356174   11846 main.go:141] libmachine: (addons-979357) DBG | found existing default KVM network
	I0913 18:21:45.356944   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.356784   11867 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014fa0}
	I0913 18:21:45.356967   11846 main.go:141] libmachine: (addons-979357) DBG | created network xml: 
	I0913 18:21:45.356978   11846 main.go:141] libmachine: (addons-979357) DBG | <network>
	I0913 18:21:45.356983   11846 main.go:141] libmachine: (addons-979357) DBG |   <name>mk-addons-979357</name>
	I0913 18:21:45.356989   11846 main.go:141] libmachine: (addons-979357) DBG |   <dns enable='no'/>
	I0913 18:21:45.356997   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357004   11846 main.go:141] libmachine: (addons-979357) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:21:45.357012   11846 main.go:141] libmachine: (addons-979357) DBG |     <dhcp>
	I0913 18:21:45.357018   11846 main.go:141] libmachine: (addons-979357) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:21:45.357022   11846 main.go:141] libmachine: (addons-979357) DBG |     </dhcp>
	I0913 18:21:45.357027   11846 main.go:141] libmachine: (addons-979357) DBG |   </ip>
	I0913 18:21:45.357033   11846 main.go:141] libmachine: (addons-979357) DBG |   
	I0913 18:21:45.357037   11846 main.go:141] libmachine: (addons-979357) DBG | </network>
	I0913 18:21:45.357041   11846 main.go:141] libmachine: (addons-979357) DBG | 
	I0913 18:21:45.362778   11846 main.go:141] libmachine: (addons-979357) DBG | trying to create private KVM network mk-addons-979357 192.168.39.0/24...
	I0913 18:21:45.429739   11846 main.go:141] libmachine: (addons-979357) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.429776   11846 main.go:141] libmachine: (addons-979357) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:21:45.429787   11846 main.go:141] libmachine: (addons-979357) DBG | private KVM network mk-addons-979357 192.168.39.0/24 created
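The network XML printed above is what the kvm2 driver hands to libvirt to get an isolated 192.168.39.0/24 network with DHCP for the guest. Done by hand, the equivalent would be roughly the following virsh calls (the file name mk-addons-979357.xml is illustrative; the driver talks to the libvirt API directly rather than shelling out):

    # Save the <network> ... </network> block above to mk-addons-979357.xml, then:
    virsh net-define mk-addons-979357.xml    # register the network definition
    virsh net-start mk-addons-979357         # create the bridge and start dnsmasq for the DHCP range
    virsh net-list --all                     # confirm mk-addons-979357 shows up as active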
	I0913 18:21:45.429871   11846 main.go:141] libmachine: (addons-979357) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:21:45.429918   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.429655   11867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.695461   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.695348   11867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa...
	I0913 18:21:45.815456   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815333   11867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk...
	I0913 18:21:45.815481   11846 main.go:141] libmachine: (addons-979357) DBG | Writing magic tar header
	I0913 18:21:45.815490   11846 main.go:141] libmachine: (addons-979357) DBG | Writing SSH key tar header
	I0913 18:21:45.815498   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:45.815436   11867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 ...
	I0913 18:21:45.815566   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357
	I0913 18:21:45.815594   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:21:45.815609   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357 (perms=drwx------)
	I0913 18:21:45.815616   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:45.815624   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:21:45.815629   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:21:45.815635   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:21:45.815641   11846 main.go:141] libmachine: (addons-979357) DBG | Checking permissions on dir: /home
	I0913 18:21:45.815651   11846 main.go:141] libmachine: (addons-979357) DBG | Skipping /home - not owner
	I0913 18:21:45.815665   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:21:45.815681   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:21:45.815693   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:21:45.815703   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:21:45.815711   11846 main.go:141] libmachine: (addons-979357) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
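Between the network creation and the domain definition, the driver lays down the per-machine artifacts under the store path: an RSA SSH key pair, a 20000MB raw disk (the key is packed into the front of that disk via the tar headers logged above and expanded by the guest on first boot), and the drwx------ / drwxr-xr-x permissions it just checked. A loose command-line sketch, for orientation only, since all of this happens in Go inside common.go and the key is embedded rather than copied separately:

    STORE=/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357
    ssh-keygen -t rsa -N '' -f "$STORE/id_rsa"                     # "Creating ssh key"
    qemu-img create -f raw "$STORE/addons-979357.rawdisk" 20000M   # "Creating raw disk image"
    chmod 700 "$STORE"                                             # matches perms=drwx------ above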
	I0913 18:21:45.815741   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:45.816699   11846 main.go:141] libmachine: (addons-979357) define libvirt domain using xml: 
	I0913 18:21:45.816712   11846 main.go:141] libmachine: (addons-979357) <domain type='kvm'>
	I0913 18:21:45.816718   11846 main.go:141] libmachine: (addons-979357)   <name>addons-979357</name>
	I0913 18:21:45.816723   11846 main.go:141] libmachine: (addons-979357)   <memory unit='MiB'>4000</memory>
	I0913 18:21:45.816728   11846 main.go:141] libmachine: (addons-979357)   <vcpu>2</vcpu>
	I0913 18:21:45.816732   11846 main.go:141] libmachine: (addons-979357)   <features>
	I0913 18:21:45.816738   11846 main.go:141] libmachine: (addons-979357)     <acpi/>
	I0913 18:21:45.816744   11846 main.go:141] libmachine: (addons-979357)     <apic/>
	I0913 18:21:45.816750   11846 main.go:141] libmachine: (addons-979357)     <pae/>
	I0913 18:21:45.816759   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.816766   11846 main.go:141] libmachine: (addons-979357)   </features>
	I0913 18:21:45.816776   11846 main.go:141] libmachine: (addons-979357)   <cpu mode='host-passthrough'>
	I0913 18:21:45.816783   11846 main.go:141] libmachine: (addons-979357)   
	I0913 18:21:45.816798   11846 main.go:141] libmachine: (addons-979357)   </cpu>
	I0913 18:21:45.816806   11846 main.go:141] libmachine: (addons-979357)   <os>
	I0913 18:21:45.816810   11846 main.go:141] libmachine: (addons-979357)     <type>hvm</type>
	I0913 18:21:45.816816   11846 main.go:141] libmachine: (addons-979357)     <boot dev='cdrom'/>
	I0913 18:21:45.816820   11846 main.go:141] libmachine: (addons-979357)     <boot dev='hd'/>
	I0913 18:21:45.816825   11846 main.go:141] libmachine: (addons-979357)     <bootmenu enable='no'/>
	I0913 18:21:45.816831   11846 main.go:141] libmachine: (addons-979357)   </os>
	I0913 18:21:45.816836   11846 main.go:141] libmachine: (addons-979357)   <devices>
	I0913 18:21:45.816843   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='cdrom'>
	I0913 18:21:45.816853   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/boot2docker.iso'/>
	I0913 18:21:45.816864   11846 main.go:141] libmachine: (addons-979357)       <target dev='hdc' bus='scsi'/>
	I0913 18:21:45.816874   11846 main.go:141] libmachine: (addons-979357)       <readonly/>
	I0913 18:21:45.816884   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816910   11846 main.go:141] libmachine: (addons-979357)     <disk type='file' device='disk'>
	I0913 18:21:45.816927   11846 main.go:141] libmachine: (addons-979357)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:21:45.816935   11846 main.go:141] libmachine: (addons-979357)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/addons-979357.rawdisk'/>
	I0913 18:21:45.816942   11846 main.go:141] libmachine: (addons-979357)       <target dev='hda' bus='virtio'/>
	I0913 18:21:45.816949   11846 main.go:141] libmachine: (addons-979357)     </disk>
	I0913 18:21:45.816955   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.816961   11846 main.go:141] libmachine: (addons-979357)       <source network='mk-addons-979357'/>
	I0913 18:21:45.816971   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.816986   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.816998   11846 main.go:141] libmachine: (addons-979357)     <interface type='network'>
	I0913 18:21:45.817019   11846 main.go:141] libmachine: (addons-979357)       <source network='default'/>
	I0913 18:21:45.817038   11846 main.go:141] libmachine: (addons-979357)       <model type='virtio'/>
	I0913 18:21:45.817050   11846 main.go:141] libmachine: (addons-979357)     </interface>
	I0913 18:21:45.817060   11846 main.go:141] libmachine: (addons-979357)     <serial type='pty'>
	I0913 18:21:45.817071   11846 main.go:141] libmachine: (addons-979357)       <target port='0'/>
	I0913 18:21:45.817077   11846 main.go:141] libmachine: (addons-979357)     </serial>
	I0913 18:21:45.817082   11846 main.go:141] libmachine: (addons-979357)     <console type='pty'>
	I0913 18:21:45.817089   11846 main.go:141] libmachine: (addons-979357)       <target type='serial' port='0'/>
	I0913 18:21:45.817096   11846 main.go:141] libmachine: (addons-979357)     </console>
	I0913 18:21:45.817105   11846 main.go:141] libmachine: (addons-979357)     <rng model='virtio'>
	I0913 18:21:45.817123   11846 main.go:141] libmachine: (addons-979357)       <backend model='random'>/dev/random</backend>
	I0913 18:21:45.817134   11846 main.go:141] libmachine: (addons-979357)     </rng>
	I0913 18:21:45.817145   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817152   11846 main.go:141] libmachine: (addons-979357)     
	I0913 18:21:45.817157   11846 main.go:141] libmachine: (addons-979357)   </devices>
	I0913 18:21:45.817163   11846 main.go:141] libmachine: (addons-979357) </domain>
	I0913 18:21:45.817170   11846 main.go:141] libmachine: (addons-979357) 
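That <domain> definition is the whole VM in one block: 2 vCPUs, 4000 MiB of RAM, host-passthrough CPU, the boot2docker ISO attached as a SCSI CD-ROM, the raw disk on virtio, one NIC on the default libvirt network and one on mk-addons-979357, a serial console, and a virtio RNG. The "Getting domain xml..." / "Creating domain..." steps that follow map onto what would otherwise be:

    virsh define addons-979357.xml    # register the domain with libvirt (XML file name illustrative)
    virsh dumpxml addons-979357       # read back the generated definition
    virsh start addons-979357         # boot the VM, after which the IP wait below begins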
	I0913 18:21:45.823068   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:c9:b7:e5 in network default
	I0913 18:21:45.823613   11846 main.go:141] libmachine: (addons-979357) Ensuring networks are active...
	I0913 18:21:45.823634   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:45.824217   11846 main.go:141] libmachine: (addons-979357) Ensuring network default is active
	I0913 18:21:45.824556   11846 main.go:141] libmachine: (addons-979357) Ensuring network mk-addons-979357 is active
	I0913 18:21:45.825087   11846 main.go:141] libmachine: (addons-979357) Getting domain xml...
	I0913 18:21:45.825697   11846 main.go:141] libmachine: (addons-979357) Creating domain...
	I0913 18:21:47.215259   11846 main.go:141] libmachine: (addons-979357) Waiting to get IP...
	I0913 18:21:47.216244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.216720   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.216737   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.216708   11867 retry.go:31] will retry after 288.192907ms: waiting for machine to come up
	I0913 18:21:47.506172   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.506706   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.506739   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.506644   11867 retry.go:31] will retry after 265.001251ms: waiting for machine to come up
	I0913 18:21:47.773271   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:47.773783   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:47.773811   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:47.773744   11867 retry.go:31] will retry after 301.987216ms: waiting for machine to come up
	I0913 18:21:48.077134   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.077602   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.077633   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.077565   11867 retry.go:31] will retry after 551.807466ms: waiting for machine to come up
	I0913 18:21:48.631439   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:48.631926   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:48.631948   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:48.631877   11867 retry.go:31] will retry after 628.057496ms: waiting for machine to come up
	I0913 18:21:49.261251   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:49.261632   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:49.261655   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:49.261592   11867 retry.go:31] will retry after 766.331433ms: waiting for machine to come up
	I0913 18:21:50.030151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.030680   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.030703   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.030633   11867 retry.go:31] will retry after 869.088297ms: waiting for machine to come up
	I0913 18:21:50.901609   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:50.902025   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:50.902046   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:50.901973   11867 retry.go:31] will retry after 1.351047403s: waiting for machine to come up
	I0913 18:21:52.255406   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:52.255833   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:52.255854   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:52.255806   11867 retry.go:31] will retry after 1.528727429s: waiting for machine to come up
	I0913 18:21:53.785667   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:53.786063   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:53.786084   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:53.786023   11867 retry.go:31] will retry after 1.928511226s: waiting for machine to come up
	I0913 18:21:55.715767   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:55.716158   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:55.716180   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:55.716108   11867 retry.go:31] will retry after 1.901214708s: waiting for machine to come up
	I0913 18:21:57.619291   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:21:57.619861   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:21:57.619887   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:21:57.619823   11867 retry.go:31] will retry after 2.844347432s: waiting for machine to come up
	I0913 18:22:00.465541   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:00.465982   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:00.466008   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:00.465919   11867 retry.go:31] will retry after 3.134520129s: waiting for machine to come up
	I0913 18:22:03.603405   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:03.603856   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find current IP address of domain addons-979357 in network mk-addons-979357
	I0913 18:22:03.603883   11846 main.go:141] libmachine: (addons-979357) DBG | I0913 18:22:03.603813   11867 retry.go:31] will retry after 4.895864383s: waiting for machine to come up
	I0913 18:22:08.503574   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.503985   11846 main.go:141] libmachine: (addons-979357) Found IP for machine: 192.168.39.34
	I0913 18:22:08.504003   11846 main.go:141] libmachine: (addons-979357) Reserving static IP address...
	I0913 18:22:08.504016   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has current primary IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
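The retry loop above (retry.go, starting around 288ms and stretching to roughly 4.9s between attempts) is simply asking libvirt for a DHCP lease matching the domain's MAC on mk-addons-979357, and it resolves once dnsmasq hands out 192.168.39.34. The same check can be run by hand, with the caveat that minikube uses a growing randomized backoff rather than a fixed sleep:

    MAC=52:54:00:9b:f4:d7
    until virsh net-dhcp-leases mk-addons-979357 | grep -qi "$MAC"; do sleep 1; done
    virsh net-dhcp-leases mk-addons-979357 | grep -i "$MAC"   # shows 192.168.39.34 once the guest is up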
	I0913 18:22:08.504317   11846 main.go:141] libmachine: (addons-979357) DBG | unable to find host DHCP lease matching {name: "addons-979357", mac: "52:54:00:9b:f4:d7", ip: "192.168.39.34"} in network mk-addons-979357
	I0913 18:22:08.572524   11846 main.go:141] libmachine: (addons-979357) DBG | Getting to WaitForSSH function...
	I0913 18:22:08.572569   11846 main.go:141] libmachine: (addons-979357) Reserved static IP address: 192.168.39.34
	I0913 18:22:08.572583   11846 main.go:141] libmachine: (addons-979357) Waiting for SSH to be available...
	I0913 18:22:08.574749   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575144   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.575171   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.575290   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH client type: external
	I0913 18:22:08.575309   11846 main.go:141] libmachine: (addons-979357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa (-rw-------)
	I0913 18:22:08.575337   11846 main.go:141] libmachine: (addons-979357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:22:08.575351   11846 main.go:141] libmachine: (addons-979357) DBG | About to run SSH command:
	I0913 18:22:08.575368   11846 main.go:141] libmachine: (addons-979357) DBG | exit 0
	I0913 18:22:08.710507   11846 main.go:141] libmachine: (addons-979357) DBG | SSH cmd err, output: <nil>: 
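The arg vector logged at 18:22:08.575337 is the external SSH probe written out field by field; collapsed into a single command it is essentially:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa \
        -p 22 docker@192.168.39.34 'exit 0'

An empty "SSH cmd err" with exit 0, as seen above, is what marks the machine as reachable and the KVM creation as complete.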
	I0913 18:22:08.710759   11846 main.go:141] libmachine: (addons-979357) KVM machine creation complete!
	I0913 18:22:08.711098   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:08.711607   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711785   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:08.711900   11846 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:22:08.711921   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:08.713103   11846 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:22:08.713119   11846 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:22:08.713127   11846 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:22:08.713138   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.715205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715543   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.715570   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.715735   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.715880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716011   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.716121   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.716248   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.716428   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.716440   11846 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:22:08.829395   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:08.829432   11846 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:22:08.829439   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.832429   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.832877   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.832903   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.833092   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.833258   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833366   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.833483   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.833650   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.833827   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.833837   11846 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:22:08.946841   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:22:08.946908   11846 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:22:08.946918   11846 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:22:08.946930   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947154   11846 buildroot.go:166] provisioning hostname "addons-979357"
	I0913 18:22:08.947176   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:08.947341   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:08.949827   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950138   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:08.950163   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:08.950307   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:08.950471   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950625   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:08.950753   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:08.950889   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:08.951047   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:08.951059   11846 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-979357 && echo "addons-979357" | sudo tee /etc/hostname
	I0913 18:22:09.084010   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-979357
	
	I0913 18:22:09.084038   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.086820   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087218   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.087244   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.087406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.087598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087771   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.087892   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.088066   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.088267   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.088291   11846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-979357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-979357/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-979357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:22:09.211719   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:22:09.211749   11846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:22:09.211801   11846 buildroot.go:174] setting up certificates
	I0913 18:22:09.211812   11846 provision.go:84] configureAuth start
	I0913 18:22:09.211824   11846 main.go:141] libmachine: (addons-979357) Calling .GetMachineName
	I0913 18:22:09.212141   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:09.214775   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215180   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.215205   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.215376   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.217631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218082   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.218145   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.218259   11846 provision.go:143] copyHostCerts
	I0913 18:22:09.218330   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:22:09.218462   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:22:09.218590   11846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:22:09.218660   11846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.addons-979357 san=[127.0.0.1 192.168.39.34 addons-979357 localhost minikube]
	I0913 18:22:09.715311   11846 provision.go:177] copyRemoteCerts
	I0913 18:22:09.715364   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:22:09.715390   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.718319   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718625   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.718650   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.718796   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.718953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.719126   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.719278   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:09.804099   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:22:09.829074   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:22:09.853991   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:22:09.877867   11846 provision.go:87] duration metric: took 666.039773ms to configureAuth
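configureAuth (666ms here) copies the host-side certs into the profile and generates a machine-specific server certificate signed by the local minikube CA, with the SANs listed at 18:22:09.218660, before scp-ing ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does the signing with Go's crypto packages; a rough openssl equivalent of that server cert (validity period and file locations illustrative) would be:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-979357" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.34,DNS:addons-979357,DNS:localhost,DNS:minikube') \
        -out server.pem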
	I0913 18:22:09.877899   11846 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:22:09.878243   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:09.878342   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:09.881237   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881647   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:09.881678   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:09.881809   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:09.882030   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882238   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:09.882372   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:09.882533   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:09.882691   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:09.882704   11846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:22:10.126542   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:22:10.126574   11846 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:22:10.126585   11846 main.go:141] libmachine: (addons-979357) Calling .GetURL
	I0913 18:22:10.128029   11846 main.go:141] libmachine: (addons-979357) DBG | Using libvirt version 6000000
	I0913 18:22:10.130547   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.130974   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.131001   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.131167   11846 main.go:141] libmachine: Docker is up and running!
	I0913 18:22:10.131183   11846 main.go:141] libmachine: Reticulating splines...
	I0913 18:22:10.131190   11846 client.go:171] duration metric: took 25.112456647s to LocalClient.Create
	I0913 18:22:10.131217   11846 start.go:167] duration metric: took 25.112517605s to libmachine.API.Create "addons-979357"
	I0913 18:22:10.131230   11846 start.go:293] postStartSetup for "addons-979357" (driver="kvm2")
	I0913 18:22:10.131254   11846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:22:10.131272   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.131521   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:22:10.131545   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.133979   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134328   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.134354   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.134501   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.134686   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.134836   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.134952   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.220806   11846 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:22:10.225490   11846 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:22:10.225520   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:22:10.225600   11846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:22:10.225631   11846 start.go:296] duration metric: took 94.394779ms for postStartSetup
	I0913 18:22:10.225667   11846 main.go:141] libmachine: (addons-979357) Calling .GetConfigRaw
	I0913 18:22:10.226323   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.229002   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229334   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.229365   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.229560   11846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/config.json ...
	I0913 18:22:10.229851   11846 start.go:128] duration metric: took 25.228992984s to createHost
	I0913 18:22:10.229878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.232158   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232608   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.232631   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.232764   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.232960   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233116   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.233281   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.233428   11846 main.go:141] libmachine: Using SSH client type: native
	I0913 18:22:10.233612   11846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0913 18:22:10.233625   11846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:22:10.347102   11846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726251730.321977350
	
	I0913 18:22:10.347128   11846 fix.go:216] guest clock: 1726251730.321977350
	I0913 18:22:10.347138   11846 fix.go:229] Guest: 2024-09-13 18:22:10.32197735 +0000 UTC Remote: 2024-09-13 18:22:10.22986562 +0000 UTC m=+25.329833233 (delta=92.11173ms)
	I0913 18:22:10.347167   11846 fix.go:200] guest clock delta is within tolerance: 92.11173ms
	I0913 18:22:10.347175   11846 start.go:83] releasing machines lock for "addons-979357", held for 25.34638377s
	I0913 18:22:10.347205   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.347489   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:10.350285   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350656   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.350686   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.350858   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351398   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351583   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:10.351693   11846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:22:10.351742   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.351791   11846 ssh_runner.go:195] Run: cat /version.json
	I0913 18:22:10.351812   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:10.354604   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354894   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.354935   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.354957   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355076   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355290   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355388   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:10.355421   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:10.355470   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.355584   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:10.355636   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.355715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:10.355878   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:10.356046   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:10.476853   11846 ssh_runner.go:195] Run: systemctl --version
	I0913 18:22:10.482887   11846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:22:10.641449   11846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:22:10.648344   11846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:22:10.648410   11846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:22:10.664019   11846 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:22:10.664043   11846 start.go:495] detecting cgroup driver to use...
	I0913 18:22:10.664124   11846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:22:10.679953   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:22:10.694986   11846 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:22:10.695040   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:22:10.709192   11846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:22:10.723529   11846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:22:10.836708   11846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:22:10.978881   11846 docker.go:233] disabling docker service ...
	I0913 18:22:10.978945   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:22:10.993279   11846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:22:11.006735   11846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:22:11.135365   11846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:22:11.245556   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:22:11.259561   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:22:11.277758   11846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:22:11.277818   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.288773   11846 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:22:11.288829   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.299334   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.309742   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.320384   11846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:22:11.331897   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.343220   11846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:22:11.361330   11846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
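The sed runs above are the whole CRI-O configuration step: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, run conmon in the pod cgroup, clear the stock net.mk CNI directory, and open unprivileged ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. After those edits the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (section headers shown as they sit in a stock drop-in; only the modified keys are listed):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]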
	I0913 18:22:11.372453   11846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:22:11.382315   11846 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:22:11.382392   11846 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:22:11.396538   11846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:22:11.407320   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:11.515601   11846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:22:11.605418   11846 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:22:11.605515   11846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:22:11.610413   11846 start.go:563] Will wait 60s for crictl version
	I0913 18:22:11.610486   11846 ssh_runner.go:195] Run: which crictl
	I0913 18:22:11.614216   11846 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:22:11.653794   11846 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:22:11.653938   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.683751   11846 ssh_runner.go:195] Run: crio --version
	I0913 18:22:11.713055   11846 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:22:11.714287   11846 main.go:141] libmachine: (addons-979357) Calling .GetIP
	I0913 18:22:11.716720   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717006   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:11.717030   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:11.717315   11846 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:22:11.721668   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:11.734152   11846 kubeadm.go:883] updating cluster {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:22:11.734262   11846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:22:11.734314   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:11.771955   11846 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 18:22:11.772020   11846 ssh_runner.go:195] Run: which lz4
	I0913 18:22:11.776099   11846 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:22:11.780348   11846 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:22:11.780377   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 18:22:13.063182   11846 crio.go:462] duration metric: took 1.287105483s to copy over tarball
	I0913 18:22:13.063246   11846 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:22:15.131948   11846 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068675166s)
	I0913 18:22:15.131980   11846 crio.go:469] duration metric: took 2.068772112s to extract the tarball
	I0913 18:22:15.131990   11846 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 18:22:15.168309   11846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:22:15.210774   11846 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:22:15.210798   11846 cache_images.go:84] Images are preloaded, skipping loading
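The preload step above first runs "sudo crictl images --output json", decides the images are missing (couldn't find registry.k8s.io/kube-apiserver:v1.31.1), copies and extracts the preloaded tarball, and then re-checks. A rough sketch of that image check in Go; the JSON field names (images, repoTags) reflect crictl's output as I understand it and should be treated as an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json`
// this check needs: each image and its repo tags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given image tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preloaded:", ok)
}
```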
	I0913 18:22:15.210807   11846 kubeadm.go:934] updating node { 192.168.39.34 8443 v1.31.1 crio true true} ...
	I0913 18:22:15.210915   11846 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-979357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:22:15.210993   11846 ssh_runner.go:195] Run: crio config
	I0913 18:22:15.258261   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:15.258285   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:15.258295   11846 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:22:15.258316   11846 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-979357 NodeName:addons-979357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:22:15.258477   11846 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-979357"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
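The kubeadm YAML above (written a few lines below as the 2154-byte /var/tmp/minikube/kubeadm.yaml.new) is generated from the cluster config shown earlier. A cut-down sketch of that kind of templating with Go's text/template, using only a handful of the values visible in the log; the template text here is illustrative, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// A few of the values visible in the log above.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	vals := kubeadmValues{
		AdvertiseAddress: "192.168.39.34",
		BindPort:         8443,
		NodeName:         "addons-979357",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.1",
	}
	if err := t.Execute(os.Stdout, vals); err != nil {
		panic(err)
	}
}
```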
	
	I0913 18:22:15.258548   11846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:22:15.268665   11846 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:22:15.268737   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:22:15.278177   11846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 18:22:15.294597   11846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:22:15.310451   11846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0913 18:22:15.326796   11846 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I0913 18:22:15.330636   11846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:22:15.343203   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:15.467199   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:15.486141   11846 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357 for IP: 192.168.39.34
	I0913 18:22:15.486166   11846 certs.go:194] generating shared ca certs ...
	I0913 18:22:15.486182   11846 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.486323   11846 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:22:15.662812   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt ...
	I0913 18:22:15.662838   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt: {Name:mk0c4ac93cc268df9a8da3c08edba4e990a1051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.662994   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key ...
	I0913 18:22:15.663004   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key: {Name:mk7c3df6b789a282ec74042612aa69d3d847194d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.663072   11846 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:22:15.760468   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt ...
	I0913 18:22:15.760493   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt: {Name:mk5938022ba0b964dbd2e8d6a95f61ea52a69c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760629   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key ...
	I0913 18:22:15.760638   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key: {Name:mk4740460ce42bde935de79b4943921492fd98a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.760700   11846 certs.go:256] generating profile certs ...
	I0913 18:22:15.760762   11846 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key
	I0913 18:22:15.760784   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt with IP's: []
	I0913 18:22:15.869917   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt ...
	I0913 18:22:15.869945   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: {Name:mk629832723b056c40a68a16d59abb9016c4d337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870132   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key ...
	I0913 18:22:15.870143   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.key: {Name:mk7fb983c54e63b71552ed34c37898232dd25c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.870218   11846 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7
	I0913 18:22:15.870238   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.34]
	I0913 18:22:15.977365   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 ...
	I0913 18:22:15.977392   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7: {Name:mk64caa72268b14b4cff0a9627f89777df35b01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977557   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 ...
	I0913 18:22:15.977570   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7: {Name:mk8693bd1404fecfaa4562dd7e045a763b78878a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:15.977637   11846 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt
	I0913 18:22:15.977706   11846 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key.667556f7 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key
	I0913 18:22:15.977750   11846 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key
	I0913 18:22:15.977766   11846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt with IP's: []
	I0913 18:22:16.102506   11846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt ...
	I0913 18:22:16.102535   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt: {Name:mk4e2dff54c8b7cdd4d081d100bae0960534d953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:16.102678   11846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key ...
	I0913 18:22:16.102688   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key: {Name:mkeaff14ff97f40f98f8eae4b259ad1243c5a15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
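The certs.go and crypto.go lines above create the shared CAs and then sign profile certificates against them; the apiserver cert carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.34. A compact sketch of the same idea with Go's crypto/x509; key sizes, validity periods, file names and the terse error handling are placeholders, not minikube's actual choices:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// writePEM stores a DER blob as a single PEM block on disk.
func writePEM(path, blockType string, der []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, &pem.Block{Type: blockType, Bytes: der})
}

func main() {
	// 1. Self-signed CA, analogous to generating the "minikubeCA" cert.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. A serving cert signed by that CA, with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.34"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	_ = writePEM("ca.crt", "CERTIFICATE", caDER)
	_ = writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(caKey))
	_ = writePEM("apiserver.crt", "CERTIFICATE", srvDER)
	_ = writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey))
}
```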
	I0913 18:22:16.102848   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:22:16.102882   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:22:16.102905   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:22:16.102929   11846 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:22:16.103974   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:22:16.128760   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:22:16.154237   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:22:16.180108   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:22:16.216371   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:22:16.241414   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:22:16.265812   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:22:16.288640   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 18:22:16.311923   11846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:22:16.335383   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:22:16.351852   11846 ssh_runner.go:195] Run: openssl version
	I0913 18:22:16.357393   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:22:16.368587   11846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373059   11846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.373123   11846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:16.378918   11846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:22:16.390126   11846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:22:16.394003   11846 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:22:16.394057   11846 kubeadm.go:392] StartCluster: {Name:addons-979357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:22:16.394167   11846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:22:16.394219   11846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:22:16.431957   11846 cri.go:89] found id: ""
	I0913 18:22:16.432037   11846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:22:16.442325   11846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:22:16.452438   11846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:22:16.462279   11846 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:22:16.462298   11846 kubeadm.go:157] found existing configuration files:
	
	I0913 18:22:16.462336   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:22:16.471621   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:22:16.471678   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:22:16.481226   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:22:16.491050   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:22:16.491106   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:22:16.501169   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.510516   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:22:16.510568   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:22:16.519925   11846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:22:16.529268   11846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:22:16.529320   11846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:22:16.539219   11846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:22:16.593329   11846 kubeadm.go:310] W0913 18:22:16.575543     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.594569   11846 kubeadm.go:310] W0913 18:22:16.576957     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:16.708878   11846 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:22:26.701114   11846 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:22:26.701216   11846 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:22:26.701325   11846 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:22:26.701444   11846 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:22:26.701566   11846 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:22:26.701658   11846 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:22:26.703010   11846 out.go:235]   - Generating certificates and keys ...
	I0913 18:22:26.703101   11846 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:22:26.703171   11846 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:22:26.703246   11846 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:22:26.703315   11846 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:22:26.703395   11846 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:22:26.703486   11846 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:22:26.703560   11846 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:22:26.703710   11846 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.703780   11846 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:22:26.703947   11846 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-979357 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I0913 18:22:26.704047   11846 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:22:26.704149   11846 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:22:26.704214   11846 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:22:26.704286   11846 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:22:26.704372   11846 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:22:26.704458   11846 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:22:26.704532   11846 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:22:26.704633   11846 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:22:26.704715   11846 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:22:26.704825   11846 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:22:26.704915   11846 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:22:26.706252   11846 out.go:235]   - Booting up control plane ...
	I0913 18:22:26.706339   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:22:26.706406   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:22:26.706497   11846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:22:26.706623   11846 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:22:26.706724   11846 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:22:26.706784   11846 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:22:26.706939   11846 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:22:26.707027   11846 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:22:26.707076   11846 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.200467ms
	I0913 18:22:26.707151   11846 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:22:26.707212   11846 kubeadm.go:310] [api-check] The API server is healthy after 5.501177192s
	I0913 18:22:26.707308   11846 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:22:26.707422   11846 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:22:26.707475   11846 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:22:26.707633   11846 kubeadm.go:310] [mark-control-plane] Marking the node addons-979357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:22:26.707707   11846 kubeadm.go:310] [bootstrap-token] Using token: d54731.5jrr63v1n2n2kz6m
	I0913 18:22:26.708858   11846 out.go:235]   - Configuring RBAC rules ...
	I0913 18:22:26.708942   11846 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:22:26.709016   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:22:26.709169   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:22:26.709274   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:22:26.709367   11846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:22:26.709442   11846 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:22:26.709548   11846 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:22:26.709594   11846 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:22:26.709640   11846 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:22:26.709650   11846 kubeadm.go:310] 
	I0913 18:22:26.709698   11846 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:22:26.709704   11846 kubeadm.go:310] 
	I0913 18:22:26.709773   11846 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:22:26.709779   11846 kubeadm.go:310] 
	I0913 18:22:26.709801   11846 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:22:26.709847   11846 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:22:26.709896   11846 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:22:26.709905   11846 kubeadm.go:310] 
	I0913 18:22:26.709959   11846 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:22:26.709965   11846 kubeadm.go:310] 
	I0913 18:22:26.710000   11846 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:22:26.710006   11846 kubeadm.go:310] 
	I0913 18:22:26.710049   11846 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:22:26.710145   11846 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:22:26.710258   11846 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:22:26.710269   11846 kubeadm.go:310] 
	I0913 18:22:26.710342   11846 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:22:26.710413   11846 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:22:26.710420   11846 kubeadm.go:310] 
	I0913 18:22:26.710489   11846 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710581   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 18:22:26.710601   11846 kubeadm.go:310] 	--control-plane 
	I0913 18:22:26.710604   11846 kubeadm.go:310] 
	I0913 18:22:26.710674   11846 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:22:26.710680   11846 kubeadm.go:310] 
	I0913 18:22:26.710750   11846 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d54731.5jrr63v1n2n2kz6m \
	I0913 18:22:26.710853   11846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
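The kubeadm output above gates progress on two probes: the kubelet's http://127.0.0.1:10248/healthz (healthy after about 0.5s) and the API server health endpoint (healthy after about 5.5s). A small sketch of that kind of poll in Go; the API server URL and the /healthz path are assumptions based on the log, and TLS verification is skipped only to keep the sketch short:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	insecure := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping TLS verification only for brevity; a real check would
		// trust the cluster CA written under /var/lib/minikube/certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoints mentioned in the kubeadm output above (4m0s budget each).
	_ = waitHealthy(http.DefaultClient, "http://127.0.0.1:10248/healthz", 4*time.Minute)
	_ = waitHealthy(insecure, "https://192.168.39.34:8443/healthz", 4*time.Minute)
}
```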
	I0913 18:22:26.710865   11846 cni.go:84] Creating CNI manager for ""
	I0913 18:22:26.710872   11846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:22:26.712247   11846 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:22:26.713291   11846 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:22:26.725202   11846 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
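The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist wires the bridge CNI to the 10.244.0.0/16 pod CIDR chosen earlier. Below is a representative bridge plus host-local conflist, written from Go so the examples stay in one language; the exact fields and values are an assumption, not necessarily byte-for-byte what minikube ships:

```go
package main

import (
	"fmt"
	"os"
)

// A typical bridge + host-local conflist for the pod CIDR used above.
// Field values are illustrative; minikube's actual 1-k8s.conflist may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root on the target host.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```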
	I0913 18:22:26.748825   11846 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:22:26.748885   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:26.748946   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-979357 minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-979357 minikube.k8s.io/primary=true
	I0913 18:22:26.785894   11846 ops.go:34] apiserver oom_adj: -16
	I0913 18:22:26.895212   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.395975   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.896320   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.395286   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.896168   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.395706   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.896217   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.395424   11846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:30.477836   11846 kubeadm.go:1113] duration metric: took 3.729011911s to wait for elevateKubeSystemPrivileges
	I0913 18:22:30.477865   11846 kubeadm.go:394] duration metric: took 14.083813405s to StartCluster
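The repeated "kubectl get sa default" runs above are a simple readiness loop: the cluster only counts as started once the controller manager has created the default ServiceAccount (about 3.7s here, per the elevateKubeSystemPrivileges metric). A sketch of that loop via os/exec, reusing the binary path and kubeconfig shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount exists in the
		// default namespace, i.e. the control plane is ready for workloads.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}
```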
	I0913 18:22:30.477884   11846 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.477996   11846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:22:30.478387   11846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:30.478575   11846 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:22:30.478599   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:22:30.478630   11846 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:22:30.478752   11846 addons.go:69] Setting yakd=true in profile "addons-979357"
	I0913 18:22:30.478773   11846 addons.go:234] Setting addon yakd=true in "addons-979357"
	I0913 18:22:30.478770   11846 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-979357"
	I0913 18:22:30.478804   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478792   11846 addons.go:69] Setting metrics-server=true in profile "addons-979357"
	I0913 18:22:30.478823   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.478809   11846 addons.go:69] Setting cloud-spanner=true in profile "addons-979357"
	I0913 18:22:30.478835   11846 addons.go:69] Setting default-storageclass=true in profile "addons-979357"
	I0913 18:22:30.478838   11846 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-979357"
	I0913 18:22:30.478848   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-979357"
	I0913 18:22:30.478849   11846 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:30.478825   11846 addons.go:234] Setting addon metrics-server=true in "addons-979357"
	I0913 18:22:30.478861   11846 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-979357"
	I0913 18:22:30.478875   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478882   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478898   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478908   11846 addons.go:69] Setting registry=true in profile "addons-979357"
	I0913 18:22:30.478923   11846 addons.go:234] Setting addon registry=true in "addons-979357"
	I0913 18:22:30.478984   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478995   11846 addons.go:69] Setting ingress=true in profile "addons-979357"
	I0913 18:22:30.479089   11846 addons.go:234] Setting addon ingress=true in "addons-979357"
	I0913 18:22:30.479124   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479203   11846 addons.go:69] Setting ingress-dns=true in profile "addons-979357"
	I0913 18:22:30.479238   11846 addons.go:234] Setting addon ingress-dns=true in "addons-979357"
	I0913 18:22:30.479259   11846 addons.go:69] Setting gcp-auth=true in profile "addons-979357"
	I0913 18:22:30.479268   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479281   11846 mustload.go:65] Loading cluster: addons-979357
	I0913 18:22:30.479301   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479333   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479338   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479346   11846 addons.go:69] Setting inspektor-gadget=true in profile "addons-979357"
	I0913 18:22:30.479350   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479360   11846 addons.go:234] Setting addon inspektor-gadget=true in "addons-979357"
	I0913 18:22:30.479369   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479383   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.479395   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479433   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479463   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479587   11846 config.go:182] Loaded profile config "addons-979357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:22:30.479600   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479640   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479708   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479727   11846 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-979357"
	I0913 18:22:30.479729   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479738   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479742   11846 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-979357"
	I0913 18:22:30.479754   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479921   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.479949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.478897   11846 addons.go:234] Setting addon cloud-spanner=true in "addons-979357"
	I0913 18:22:30.480164   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480219   11846 addons.go:69] Setting volcano=true in profile "addons-979357"
	I0913 18:22:30.480245   11846 addons.go:234] Setting addon volcano=true in "addons-979357"
	I0913 18:22:30.480280   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.478820   11846 addons.go:69] Setting storage-provisioner=true in profile "addons-979357"
	I0913 18:22:30.480370   11846 addons.go:234] Setting addon storage-provisioner=true in "addons-979357"
	I0913 18:22:30.480426   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480535   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480572   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480640   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480673   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.480820   11846 addons.go:69] Setting volumesnapshots=true in profile "addons-979357"
	I0913 18:22:30.480840   11846 addons.go:234] Setting addon volumesnapshots=true in "addons-979357"
	I0913 18:22:30.480871   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.480912   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.480944   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.481326   11846 out.go:177] * Verifying Kubernetes components...
	I0913 18:22:30.479242   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481520   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.479334   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.481650   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.482721   11846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:22:30.500237   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0913 18:22:30.500463   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0913 18:22:30.500482   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0913 18:22:30.500639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0913 18:22:30.500830   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500893   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.500990   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501068   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.501371   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501388   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501510   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501533   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501550   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501853   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.501869   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.501892   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.501924   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502060   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.502499   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.502534   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.508808   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.508875   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I0913 18:22:30.514450   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514505   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514561   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514588   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514611   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.514702   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.514722   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.515525   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.515558   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518495   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0913 18:22:30.518648   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0913 18:22:30.518780   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.518966   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.533480   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538314   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.538358   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.538478   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.538926   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0913 18:22:30.539091   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539109   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539180   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.539204   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.539375   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.539537   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539596   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.539644   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.540197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540517   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.540641   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.540690   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.541616   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.541640   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.541970   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.542152   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.544274   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.544510   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.544533   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546219   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.546227   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.546234   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.546254   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:30.546261   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:30.546395   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0913 18:22:30.546903   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.547397   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.547419   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.547706   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.548255   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.548304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0913 18:22:30.560435   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0913 18:22:30.560480   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0913 18:22:30.560448   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0913 18:22:30.560561   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0913 18:22:30.560630   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0913 18:22:30.560674   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:30.560692   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:30.560628   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:30.560639   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	W0913 18:22:30.560805   11846 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 18:22:30.561065   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561200   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561277   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.561349   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562326   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562336   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562417   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562436   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562408   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562457   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562500   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.562522   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562532   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.562564   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.562575   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563271   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563375   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.563548   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563558   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.563593   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.563886   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.563903   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564271   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.564314   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.564394   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.564411   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.564907   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565005   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565037   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565075   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.565330   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.565392   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0913 18:22:30.566066   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566122   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566267   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.566304   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.566523   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.567164   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.567203   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.570708   11846 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-979357"
	I0913 18:22:30.570757   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.571197   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571229   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.571302   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.571683   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.571734   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.571887   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.571926   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.572171   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.572551   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.572627   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.581211   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.581280   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0913 18:22:30.581285   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0913 18:22:30.581511   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.582226   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.582518   11846 addons.go:234] Setting addon default-storageclass=true in "addons-979357"
	I0913 18:22:30.582554   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.582746   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.582762   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.582915   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.582949   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.584229   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.584265   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.584235   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0913 18:22:30.584426   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.584925   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.584947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.585303   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.585508   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.586552   11846 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:22:30.586648   11846 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:22:30.586943   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.587350   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.587363   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0913 18:22:30.587491   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.590472   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.590556   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590571   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.590931   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.590947   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.591000   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591151   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:22:30.591166   11846 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:22:30.591190   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.591251   11846 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:22:30.591281   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.591303   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592093   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.592703   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0913 18:22:30.592773   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:22:30.593276   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.593795   11846 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:30.593980   11846 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:22:30.594465   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.594464   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:22:30.594524   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.595224   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I0913 18:22:30.595443   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.595455   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.595704   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.595774   11846 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:22:30.596005   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:22:30.596021   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.596021   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.596151   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.596413   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:22:30.596485   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:30.596641   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.597089   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.597116   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.597626   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.597205   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.597661   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.597680   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.597823   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.597900   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.597924   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:22:30.597937   11846 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:22:30.597966   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.598032   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.598634   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.598726   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:30.598936   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.599673   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.599727   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600006   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.600036   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.600232   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:30.600261   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.600288   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:22:30.600338   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.600344   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.600980   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.601242   11846 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:22:30.601962   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.602482   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.602787   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0913 18:22:30.602898   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.602716   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.603290   11846 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:22:30.603303   11846 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:22:30.603320   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.603501   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.603522   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.603562   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.603698   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.603843   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.603971   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.604143   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.604873   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.604890   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.605828   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605850   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.605884   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.606050   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.606504   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.606528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.606942   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607111   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.607137   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.607517   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.607675   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.607867   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.607917   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.608172   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608407   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.608496   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608593   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.608646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.608773   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.608791   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.608953   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.609011   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.609108   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.609196   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.609292   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.610290   11846 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:22:30.610387   11846 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:22:30.611752   11846 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:30.611767   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:22:30.611783   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.611860   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:22:30.611868   11846 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:22:30.611881   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.615942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616142   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0913 18:22:30.616410   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.616449   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.616495   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.616724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.616880   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.616942   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617103   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.617382   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.617407   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.617450   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.617566   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.617700   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.617907   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.617923   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.617987   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.618223   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.618283   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.618400   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.618450   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0913 18:22:30.619331   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.619872   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.619894   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.620712   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.620723   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.621112   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0913 18:22:30.621385   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.621616   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0913 18:22:30.621630   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.621681   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.621808   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.621830   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.621985   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.622213   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.622502   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.622523   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.622544   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.622785   11846 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:22:30.623076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.623434   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.624020   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0913 18:22:30.624371   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.624479   11846 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:30.624499   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:22:30.624514   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.624774   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.624794   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.625076   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.625321   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.626357   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.627106   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.628111   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:22:30.628769   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629056   11846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:22:30.629179   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.629566   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.629413   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.629715   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.629829   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.629985   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.631455   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:30.631475   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:22:30.631490   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.632139   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:22:30.634478   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:22:30.634531   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.634969   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.634985   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.635140   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.635299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.635443   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.635542   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.636827   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:22:30.637904   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:22:30.639028   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:22:30.640544   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:22:30.641535   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0913 18:22:30.641939   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642316   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0913 18:22:30.642465   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.642489   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.642731   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.642818   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.642875   11846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:22:30.643103   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.643113   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.643375   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:30.643394   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.643415   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:30.643509   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.644348   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:22:30.644366   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:22:30.644386   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.645550   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.647421   11846 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:22:30.647683   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648186   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.648207   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.648479   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.648648   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.648781   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.648911   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.649886   11846 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:22:30.651056   11846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:30.651073   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:22:30.651091   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.654528   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.654955   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.654976   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.655136   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.655308   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.655455   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.655556   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.661503   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0913 18:22:30.661851   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:30.662364   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:30.662380   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:30.662640   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:30.662820   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:30.664099   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:30.664269   11846 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.664283   11846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:22:30.664299   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:30.666963   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667366   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:30.667383   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:30.667513   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:30.667646   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:30.667741   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:30.667850   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:30.876396   11846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:22:30.876459   11846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:22:30.928879   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:30.930858   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:22:30.930876   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:22:30.989689   11846 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:22:30.989714   11846 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:22:31.040586   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:31.057460   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:31.100555   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:22:31.100583   11846 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:22:31.105990   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:22:31.106016   11846 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:22:31.191777   11846 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:22:31.191803   11846 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:22:31.194629   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:22:31.194653   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:22:31.261951   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:22:31.268194   11846 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:22:31.268218   11846 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:22:31.269743   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:22:31.269764   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:22:31.367341   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:31.383222   11846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.383252   11846 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:22:31.394617   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:22:31.396907   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:31.431732   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:22:31.431760   11846 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:22:31.472624   11846 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:22:31.472651   11846 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:22:31.498512   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:22:31.498541   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:22:31.549749   11846 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.549772   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:22:31.556719   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:22:31.556741   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:22:31.566668   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:31.583646   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:22:31.583673   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:22:31.624498   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:22:31.624524   11846 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:22:31.705541   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:22:31.705566   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:22:31.738522   11846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:31.738549   11846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:22:31.744752   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:31.774264   11846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:22:31.774288   11846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:22:31.899545   11846 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:31.899571   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:22:31.916895   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:22:31.916922   11846 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:22:32.112312   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:22:32.112341   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:22:32.123767   11846 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:32.123794   11846 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:22:32.215746   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:32.287431   11846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:32.287460   11846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:22:32.301669   11846 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.301701   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:22:32.394481   11846 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.394508   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:22:32.514672   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:32.514700   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:22:32.519283   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.584445   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:32.808431   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:32.808460   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:22:32.958075   11846 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.081583936s)
	I0913 18:22:32.958125   11846 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 18:22:32.958136   11846 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.081703044s)
	I0913 18:22:32.958221   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.029312252s)
	I0913 18:22:32.958260   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.958271   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959173   11846 node_ready.go:35] waiting up to 6m0s for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.959336   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959354   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.959377   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.959389   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.959904   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.959941   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.959953   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:32.962939   11846 node_ready.go:49] node "addons-979357" has status "Ready":"True"
	I0913 18:22:32.962965   11846 node_ready.go:38] duration metric: took 3.757473ms for node "addons-979357" to be "Ready" ...
	I0913 18:22:32.962977   11846 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:32.981363   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:32.982346   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:32.982366   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:32.982651   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:32.982696   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:32.982707   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:33.207362   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:33.207383   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:22:33.462364   11846 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-979357" context rescaled to 1 replicas
	I0913 18:22:33.565942   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:33.565968   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:22:33.892546   11846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:33.892578   11846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:22:34.137718   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:35.208928   11846 pod_ready.go:103] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:35.463173   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.422547754s)
	I0913 18:22:35.463218   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463226   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463481   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463503   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:35.463512   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:35.463519   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:35.463699   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:35.463745   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:35.463754   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.177658   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.120163066s)
	I0913 18:22:36.177710   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177722   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177781   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.915798657s)
	I0913 18:22:36.177817   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177829   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177818   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.810444318s)
	I0913 18:22:36.177874   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.177895   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.177950   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.177983   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.177995   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178004   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178012   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178377   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178392   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178415   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178438   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178473   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.178498   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178511   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178524   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178536   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178447   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178606   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.178613   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.178625   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.178943   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.178958   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.179947   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.179951   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.179962   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.391729   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:36.391752   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:36.392010   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:36.392058   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:36.392065   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:36.513516   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:36.513545   11846 pod_ready.go:82] duration metric: took 3.532154275s for pod "coredns-7c65d6cfc9-2gkt9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:36.513561   11846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.702586   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:22:37.702623   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:37.705721   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706173   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:37.706204   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:37.706406   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:37.706598   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:37.706724   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:37.706834   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:37.941566   11846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:22:38.057578   11846 addons.go:234] Setting addon gcp-auth=true in "addons-979357"
	I0913 18:22:38.057630   11846 host.go:66] Checking if "addons-979357" exists ...
	I0913 18:22:38.057962   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.057998   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.072716   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0913 18:22:38.073244   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.073727   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.073753   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.074119   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.074874   11846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:22:38.074920   11846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:22:38.089603   11846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0913 18:22:38.090145   11846 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:22:38.090681   11846 main.go:141] libmachine: Using API Version  1
	I0913 18:22:38.090703   11846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:22:38.091107   11846 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:22:38.091372   11846 main.go:141] libmachine: (addons-979357) Calling .GetState
	I0913 18:22:38.093189   11846 main.go:141] libmachine: (addons-979357) Calling .DriverName
	I0913 18:22:38.093398   11846 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:22:38.093425   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHHostname
	I0913 18:22:38.096456   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.096850   11846 main.go:141] libmachine: (addons-979357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:d7", ip: ""} in network mk-addons-979357: {Iface:virbr1 ExpiryTime:2024-09-13 19:22:00 +0000 UTC Type:0 Mac:52:54:00:9b:f4:d7 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-979357 Clientid:01:52:54:00:9b:f4:d7}
	I0913 18:22:38.096871   11846 main.go:141] libmachine: (addons-979357) DBG | domain addons-979357 has defined IP address 192.168.39.34 and MAC address 52:54:00:9b:f4:d7 in network mk-addons-979357
	I0913 18:22:38.097020   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHPort
	I0913 18:22:38.097184   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHKeyPath
	I0913 18:22:38.097332   11846 main.go:141] libmachine: (addons-979357) Calling .GetSSHUsername
	I0913 18:22:38.097456   11846 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/addons-979357/id_rsa Username:docker}
	I0913 18:22:38.611050   11846 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:38.611074   11846 pod_ready.go:82] duration metric: took 2.097504572s for pod "coredns-7c65d6cfc9-mtltd" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:38.611087   11846 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.180671   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.783727776s)
	I0913 18:22:39.180723   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180729   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.78607227s)
	I0913 18:22:39.180743   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.614047493s)
	I0913 18:22:39.180760   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180786   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.436006015s)
	I0913 18:22:39.180808   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180818   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180820   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.965045353s)
	I0913 18:22:39.180833   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180846   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180763   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.180917   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180791   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.180980   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.661665418s)
	I0913 18:22:39.180735   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	W0913 18:22:39.181015   11846 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:39.181035   11846 retry.go:31] will retry after 132.635799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
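	[note] The failed apply above and its retry illustrate a CRD establishment race: the VolumeSnapshotClass manifest is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when the custom resource arrives, and the apply is re-run (see the --force retry further down at 18:22:39.314470). Below is a minimal sketch, assuming client-go and the kubeconfig path shown in these logs, of one way to avoid that race by waiting for the CRD to report Established before applying resources of its kind. The helper is illustrative only, not minikube's actual retry logic.

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRDEstablished polls the named CRD until its Established condition
	// is True, meaning the API server is serving the new kind. Hypothetical helper.
	func waitForCRDEstablished(ctx context.Context, client apiextclient.Interface, name string) error {
		for {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return nil // safe to apply custom resources of this kind now
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("CRD %s not established: %w", name, ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		// Kubeconfig path taken from the apply commands in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		fmt.Println("CRD established; csi-hostpath-snapshotclass.yaml can be applied")
	}

	An equivalent shell-level check would be "kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io" before applying the snapshot class manifest.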
	I0913 18:22:39.181141   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.5966432s)
	I0913 18:22:39.181168   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181177   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.181255   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.181292   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.181299   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.181306   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.181313   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182158   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182169   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182177   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182194   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.182874   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.182909   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.182918   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.182925   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.182932   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183061   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183085   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183090   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183101   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183173   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183188   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183192   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183198   11846 addons.go:475] Verifying addon metrics-server=true in "addons-979357"
	I0913 18:22:39.183211   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183227   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183233   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183141   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183266   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183276   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183394   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183404   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183412   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183277   11846 addons.go:475] Verifying addon registry=true in "addons-979357"
	I0913 18:22:39.183673   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183702   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183709   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183175   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.183811   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.183814   11846 pod_ready.go:93] pod "etcd-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.183240   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.183829   11846 pod_ready.go:82] duration metric: took 572.7356ms for pod "etcd-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183838   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183842   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.183149   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.183818   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:39.184008   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:39.183276   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.184353   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.184367   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.184376   11846 addons.go:475] Verifying addon ingress=true in "addons-979357"
	I0913 18:22:39.185002   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:39.185027   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:39.186229   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:39.186332   11846 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-979357 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:22:39.187398   11846 out.go:177] * Verifying registry addon...
	I0913 18:22:39.188256   11846 out.go:177] * Verifying ingress addon...
	I0913 18:22:39.189818   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:22:39.190687   11846 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:22:39.210962   11846 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:22:39.211000   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.212603   11846 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:22:39.212623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.314470   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:39.711545   11846 pod_ready.go:93] pod "kube-apiserver-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.711574   11846 pod_ready.go:82] duration metric: took 527.723521ms for pod "kube-apiserver-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.711588   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.720988   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.727065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.735954   11846 pod_ready.go:93] pod "kube-controller-manager-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.735985   11846 pod_ready.go:82] duration metric: took 24.3888ms for pod "kube-controller-manager-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.735999   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749808   11846 pod_ready.go:93] pod "kube-proxy-qxmw4" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.749827   11846 pod_ready.go:82] duration metric: took 13.820436ms for pod "kube-proxy-qxmw4" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.749836   11846 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761817   11846 pod_ready.go:93] pod "kube-scheduler-addons-979357" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.761834   11846 pod_ready.go:82] duration metric: took 11.992857ms for pod "kube-scheduler-addons-979357" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.761841   11846 pod_ready.go:39] duration metric: took 6.798852631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:39.761856   11846 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:39.761902   11846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.017133876s)
	I0913 18:22:40.110559   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.972790008s)
	I0913 18:22:40.110740   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.110759   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.110996   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111013   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111021   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:40.111029   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:40.111037   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.111346   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:40.111360   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:40.111369   11846 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-979357"
	I0913 18:22:40.111372   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:40.112081   11846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:40.113065   11846 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:22:40.114734   11846 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:22:40.115664   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:22:40.115892   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:40.115906   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:22:40.132558   11846 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:22:40.132577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.211311   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:40.211334   11846 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:22:40.220393   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.220516   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:40.300610   11846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.300638   11846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:22:40.389824   11846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:40.621694   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.843154   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.844023   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.120868   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.194711   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.195587   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.201412   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.886888763s)
	I0913 18:22:41.201454   11846 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.439534942s)
	I0913 18:22:41.201468   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201480   11846 api_server.go:72] duration metric: took 10.722879781s to wait for apiserver process to appear ...
	I0913 18:22:41.201485   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.201489   11846 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:41.201511   11846 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0913 18:22:41.201764   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.201822   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.201837   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.201844   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.201852   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.202028   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.202047   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.206053   11846 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I0913 18:22:41.206959   11846 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:41.206977   11846 api_server.go:131] duration metric: took 5.482612ms to wait for apiserver health ...
	I0913 18:22:41.206984   11846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:41.214695   11846 system_pods.go:59] 18 kube-system pods found
	I0913 18:22:41.214727   11846 system_pods.go:61] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.214735   11846 system_pods.go:61] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.214746   11846 system_pods.go:61] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.214760   11846 system_pods.go:61] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.214772   11846 system_pods.go:61] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.214782   11846 system_pods.go:61] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.214789   11846 system_pods.go:61] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.214797   11846 system_pods.go:61] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.214807   11846 system_pods.go:61] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.214821   11846 system_pods.go:61] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.214830   11846 system_pods.go:61] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.214838   11846 system_pods.go:61] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.214850   11846 system_pods.go:61] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.214862   11846 system_pods.go:61] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.214872   11846 system_pods.go:61] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.214884   11846 system_pods.go:61] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214903   11846 system_pods.go:61] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.214910   11846 system_pods.go:61] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.214917   11846 system_pods.go:74] duration metric: took 7.926337ms to wait for pod list to return data ...
	I0913 18:22:41.214926   11846 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:41.217763   11846 default_sa.go:45] found service account: "default"
	I0913 18:22:41.217781   11846 default_sa.go:55] duration metric: took 2.845911ms for default service account to be created ...
	I0913 18:22:41.217790   11846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:41.226796   11846 system_pods.go:86] 18 kube-system pods found
	I0913 18:22:41.226823   11846 system_pods.go:89] "coredns-7c65d6cfc9-2gkt9" [d1e3da77-7c54-4cc2-a26f-32731b8c03d0] Running
	I0913 18:22:41.226831   11846 system_pods.go:89] "coredns-7c65d6cfc9-mtltd" [bee68b4c-c773-4bb2-b088-1fe4a816edf3] Running
	I0913 18:22:41.226841   11846 system_pods.go:89] "csi-hostpath-attacher-0" [8a5b2986-b2ca-4a85-b195-1c8eb80a223e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:41.226852   11846 system_pods.go:89] "csi-hostpath-resizer-0" [e9c848e7-3276-496f-a60f-69f8eb633740] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:41.226862   11846 system_pods.go:89] "csi-hostpathplugin-zhd46" [a53ceb0b-635b-4fa8-a72b-60d626a4370f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:41.226869   11846 system_pods.go:89] "etcd-addons-979357" [1267edb9-0c88-4573-80ab-4c18edfd79fa] Running
	I0913 18:22:41.226876   11846 system_pods.go:89] "kube-apiserver-addons-979357" [9d630d36-12c0-4389-b21b-4a5befb11de4] Running
	I0913 18:22:41.226883   11846 system_pods.go:89] "kube-controller-manager-addons-979357" [77e27eb8-234a-4da6-a8f5-c94a66a9d3dc] Running
	I0913 18:22:41.226896   11846 system_pods.go:89] "kube-ingress-dns-minikube" [a82db8f0-646e-4f6c-8dda-7332bed77579] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:41.226903   11846 system_pods.go:89] "kube-proxy-qxmw4" [3e77278b-62ae-4a68-bbba-ca3108d18280] Running
	I0913 18:22:41.226913   11846 system_pods.go:89] "kube-scheduler-addons-979357" [a40db901-708e-481e-aedf-f54669897c0e] Running
	I0913 18:22:41.226923   11846 system_pods.go:89] "metrics-server-84c5f94fbc-qw488" [cf270e35-c498-455b-bc82-0a19e8f606aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:41.226936   11846 system_pods.go:89] "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:41.226945   11846 system_pods.go:89] "registry-66c9cd494c-pwx9m" [d9453f5b-a1d3-40e4-80d3-2250edd642ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:41.226956   11846 system_pods.go:89] "registry-proxy-2jrhs" [8223e4fa-f130-48c6-ab8b-764434495610] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:41.226966   11846 system_pods.go:89] "snapshot-controller-56fcc65765-fvbcx" [9043c1eb-e28f-4af5-af33-529d05cce5c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226979   11846 system_pods.go:89] "snapshot-controller-56fcc65765-r58vx" [661bb76c-4862-41f0-a2d0-1c774b91c7dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:41.226987   11846 system_pods.go:89] "storage-provisioner" [09e9768b-ce9c-47d6-8650-191c7f864a9c] Running
	I0913 18:22:41.226997   11846 system_pods.go:126] duration metric: took 9.200944ms to wait for k8s-apps to be running ...
	I0913 18:22:41.227009   11846 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:41.227055   11846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:41.634996   11846 system_svc.go:56] duration metric: took 407.978559ms WaitForService to wait for kubelet
	I0913 18:22:41.635015   11846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.245157022s)
	I0913 18:22:41.635029   11846 kubeadm.go:582] duration metric: took 11.156427988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:41.635054   11846 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:41.635053   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635073   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635381   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635400   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.635410   11846 main.go:141] libmachine: Making call to close driver server
	I0913 18:22:41.635434   11846 main.go:141] libmachine: (addons-979357) DBG | Closing plugin on server side
	I0913 18:22:41.635497   11846 main.go:141] libmachine: (addons-979357) Calling .Close
	I0913 18:22:41.635722   11846 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:22:41.635759   11846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:22:41.638410   11846 addons.go:475] Verifying addon gcp-auth=true in "addons-979357"
	I0913 18:22:41.640220   11846 out.go:177] * Verifying gcp-auth addon...
	I0913 18:22:41.642958   11846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:22:41.721176   11846 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:22:41.721197   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:41.722056   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.765233   11846 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:22:41.765260   11846 node_conditions.go:123] node cpu capacity is 2
	I0913 18:22:41.765276   11846 node_conditions.go:105] duration metric: took 130.215708ms to run NodePressure ...
	I0913 18:22:41.765289   11846 start.go:241] waiting for startup goroutines ...
	I0913 18:22:41.787100   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.787864   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.120679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.147184   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.194390   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.195105   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.619872   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.645630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:42.693894   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.695153   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.120929   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.145927   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.194596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.195583   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.621917   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.645549   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:43.693559   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.695135   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.121292   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.146843   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.195593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.195599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.621514   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.646833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:44.694699   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.695284   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.121000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.146665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.221808   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:45.221886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.621175   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.646182   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:45.696648   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.697620   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.147336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.193470   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.195172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:46.620919   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.646586   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:46.693776   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.694844   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.121098   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.146164   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.194357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.194812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.620988   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.646008   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:47.695231   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.695519   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.123021   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.148617   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.194472   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.197071   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.620608   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.647296   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:48.693740   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.696156   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.121349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.193353   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.195100   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.620792   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.646311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:49.694786   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.695121   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.120264   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.146350   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.195145   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.195301   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.623572   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.647378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:50.694258   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.695502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.121299   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.147289   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.195022   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.196037   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.622665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.647969   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:51.694417   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.695278   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.120925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.147440   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.193805   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.195323   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.620665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.646899   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:52.694596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.695098   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.121172   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.147196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.193933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.195515   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.620912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.646554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:53.694887   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.696858   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.121127   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.146492   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.193531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.196209   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.619665   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.647089   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:54.693272   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.695620   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.121110   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.222531   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.223243   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.621744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.647722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:55.695503   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.695685   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.120857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.147149   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.195602   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.195853   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:56.620083   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.646767   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:56.695272   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.696725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.120527   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.146315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.196813   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.197244   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:57.620578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.647230   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:57.693611   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.695949   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.120685   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.147408   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.193377   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.195277   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.620171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.646736   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:58.695046   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.695240   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.121002   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.146152   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.193596   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:59.195514   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.621837   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.646971   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:22:59.695285   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.695341   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.120985   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.146606   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.194196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.195216   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:00.622220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.648159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:00.693250   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:00.695562   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.121311   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.147065   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.198443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.198571   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.620857   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.647554   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:01.695186   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:01.695496   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.120196   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.147540   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.194122   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.196710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.623336   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.646284   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:02.693416   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:02.695367   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.121367   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.146882   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.195451   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.196172   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.620748   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.647039   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:03.694700   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:03.695234   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.121411   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.148078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.194865   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.195162   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.620921   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.645990   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:04.695569   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:04.695683   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.120274   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.146571   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.220150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.220498   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.621456   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.647109   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:05.694530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:05.695969   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.120728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.146744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.195253   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:06.195415   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.620898   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.647924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:06.694635   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.694976   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.127001   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.146392   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.193687   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.196384   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:07.621298   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.646498   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:07.693773   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:07.695419   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.127877   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.145692   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.193920   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:08.196181   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.622851   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.647712   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:08.694786   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.696188   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.120734   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.147876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.194575   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.195140   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:09.620159   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:09.693725   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:09.695051   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.121729   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.147049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.195211   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.195743   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.620510   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.646705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:10.694026   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:10.695703   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.131933   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.221769   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.222414   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.222614   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.620112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.646407   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:11.693639   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:11.695523   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.120722   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.147783   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.195174   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.195474   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.620765   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.646438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:12.693266   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:12.695076   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.120438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.146881   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.195465   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.195886   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.621014   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.646016   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:13.695763   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:13.696160   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.121538   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.146032   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.194101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.194532   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.620817   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.646854   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:14.694932   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:14.695089   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.119855   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.146131   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.220403   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.220546   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.626509   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.648020   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.694713   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:15.696103   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.120717   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.147101   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.193946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.195256   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.625357   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.721430   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:16.721848   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:16.722175   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.120426   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.145905   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.220147   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.220899   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.621209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.646445   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:17.693623   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:17.695270   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.120271   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.146686   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.193954   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.196010   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.621171   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.646946   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:18.694564   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:18.695211   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.120113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.146469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.196297   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:19.196447   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.650974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:19.651697   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:19.698508   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:19.699902   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.120815   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.146825   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.195112   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.195337   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:20.620833   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:20.648724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:20.695238   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:20.695503   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.120670   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.146241   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.193758   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.195248   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:21.620443   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:21.647189   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:21.693673   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:21.695255   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.120315   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.146703   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.194041   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.195417   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:22.620344   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:22.646609   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:22.694000   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:22.695298   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.119630   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.146904   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.195745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:23.195868   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.620453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:23.645852   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:23.695186   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:23.695233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.120504   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.146668   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.193779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.194861   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:24.626216   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:24.646458   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:24.694012   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:24.695912   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.121136   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.147431   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.195249   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.195382   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:25.622578   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:25.646123   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:25.693993   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:25.696212   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.121205   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.145925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:26.195513   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.195566   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:23:26.624415   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:26.722553   11846 kapi.go:107] duration metric: took 47.532730438s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:23:26.722593   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:26.722614   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.120042   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.146166   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.195294   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:27.622218   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:27.646583   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:27.695195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.120287   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.146533   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.195157   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:28.619787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:28.645876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:28.696846   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.121064   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.146637   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.195783   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:29.626830   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:29.726354   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:29.727329   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.119787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.145744   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:30.624823   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:30.646556   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:30.695578   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.120515   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.154577   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.196849   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:31.620779   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:31.647534   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:31.695303   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.120078   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.146438   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.195173   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:32.620076   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:32.646251   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:32.694883   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.120737   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.146599   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.194850   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:33.621679   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:33.646334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:33.695142   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.121576   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.146542   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.195016   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:34.623471   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:34.647269   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:34.694854   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.121463   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.147807   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.222465   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:35.620588   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:35.646453   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.694862   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.121876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.147202   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.195143   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:36.621045   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:36.647726   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.695696   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.121125   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.147217   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.194840   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:37.621359   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:37.646372   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.695547   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.121220   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.146601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.195403   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:38.625530   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:38.645912   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.725502   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.122386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.146745   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.195189   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:39.620370   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:39.645995   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.694761   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.119935   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.149974   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.195722   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:40.620233   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:40.646888   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.120849   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.146610   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.198361   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:41.622772   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:41.646925   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.695237   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.120998   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.152683   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.221014   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:42.621924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:42.646885   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.695597   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.120297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.146446   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.195887   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:43.621897   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:43.646013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.696557   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.121163   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.147972   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.195376   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:44.621728   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:44.647558   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.720987   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.121126   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.157724   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.258976   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:45.622505   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:45.646349   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.694812   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.123467   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.147968   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.194710   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:46.620795   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:46.648638   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.696589   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.125323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.148794   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.226767   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:47.625133   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:47.665246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.697347   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.120702   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.146546   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.196137   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:48.620081   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:48.646626   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.697799   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.120469   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.146490   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.195195   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:49.623297   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:49.647120   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.694857   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.121396   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:50.146235   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:50.195440   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:50.620309   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.036246   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.036422   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.120322   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.146655   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.196307   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:51.621288   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:51.646663   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.695788   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.120768   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.147113   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.194880   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:52.620746   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:52.646876   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.695644   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.120209   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.146049   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.194556   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:53.623965   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:53.646378   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.697202   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.119892   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.220040   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.220900   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:54.620194   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:54.646265   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.694508   11846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:55.120705   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.147221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:55.221270   11846 kapi.go:107] duration metric: took 1m16.030581818s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:23:55.620551   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:55.722715   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.123824   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.145750   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.620150   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:56.646276   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.120601   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.146762   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.620594   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:57.646802   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.120308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.146334   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.621532   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:58.646676   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.126657   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.151013   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.620308   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:59.646351   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.121433   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.146323   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.620455   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:00.647099   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.123791   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:01.148334   11846 kapi.go:107] duration metric: took 1m19.505373536s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:01.150141   11846 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-979357 cluster.
	I0913 18:24:01.151499   11846 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:01.152977   11846 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:01.620787   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.121029   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:02.619924   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.121161   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:03.623550   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.121221   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:04.621386   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.120200   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:05.620252   11846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:24:06.120523   11846 kapi.go:107] duration metric: took 1m26.004857088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:24:06.122184   11846 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, cloud-spanner, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 18:24:06.123444   11846 addons.go:510] duration metric: took 1m35.644821989s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server cloud-spanner inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 18:24:06.123477   11846 start.go:246] waiting for cluster config update ...
	I0913 18:24:06.123493   11846 start.go:255] writing updated cluster config ...
	I0913 18:24:06.123731   11846 ssh_runner.go:195] Run: rm -f paused
	I0913 18:24:06.194823   11846 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:06.196641   11846 out.go:177] * Done! kubectl is now configured to use "addons-979357" cluster and "default" namespace by default
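	
	The start log above prints a hint about the `gcp-auth-skip-secret` label for opting a pod out of GCP credential mounting. As a rough, hedged sketch only (not part of the captured test run), the Go snippet below shows one way such a pod could be created with that label via client-go; the label value "true", the pod and image names, and the kubeconfig wiring are assumptions made for illustration, not values taken from this report.
	
	// Sketch (assumptions noted in comments): create a pod carrying the
	// gcp-auth-skip-secret label so the gcp-auth webhook is expected to skip it.
	package main
	
	import (
		"context"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the default kubeconfig (assumed to point at the minikube cluster).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
	
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				// Label key from the hint above; the "true" value is an assumption.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "sleep",
					Image:   "busybox:1.36", // arbitrary example image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	
		// Because the label is present at admission time, the webhook should not
		// inject the credential mount into this pod.
		if _, err := clientset.CoreV1().Pods("default").Create(
			context.Background(), pod, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
	}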
	
	
	==> CRI-O <==
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.963088394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db0bc43b-b29a-4e1c-bf31-dc60b7154546 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.964286889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39d556a7-2666-4599-92dc-7fe15814db82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.965402682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252639965378526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39d556a7-2666-4599-92dc-7fe15814db82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.966059025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f80bf64-4677-4e6a-ae44-f7947b7706a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.966119780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f80bf64-4677-4e6a-ae44-f7947b7706a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:19 addons-979357 crio[661]: time="2024-09-13 18:37:19.966363842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755
065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f80bf64-4677-4e6a-ae44-f7947b7706a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.003486191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b42b7e00-8e9e-4e7b-adaf-0cc295bd77e3 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.003562132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b42b7e00-8e9e-4e7b-adaf-0cc295bd77e3 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.004960367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4820c8e9-cf5b-4e39-b363-3dc0f3d1a2ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.006198750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252640006170588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4820c8e9-cf5b-4e39-b363-3dc0f3d1a2ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.006804957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc33473e-3fde-4d81-a36c-b72da73fb3cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.006892560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc33473e-3fde-4d81-a36c-b72da73fb3cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.007181288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755
065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc33473e-3fde-4d81-a36c-b72da73fb3cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.018125815Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17f9cdbc-f31b-45b0-a3f3-54aac74e7965 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.018765295Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-hw97l,Uid:2e838a30-9cc7-4bd7-a481-378b6fe7bd29,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726252546461385063,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:35:46.151989470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&PodSandboxMetadata{Name:nginx,Uid:806d4c49-56fb-4b01-a2cd-83bdf674d6eb,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1726252405162607646,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:33:24.830191563Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9101128de42bf61a447d87dd1fc0d890643c5e132024271066297b568a31389e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:fadcf5b8-b54e-4896-9ab6-b7294f3c8503,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251846843159481,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fadcf5b8-b54e-4896-9ab6-b7294f3c8503,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:24:06.527459510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3ecf2966876727ebd
3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-j795q,Uid:943ee71f-15fc-4b01-8fa1-385d59ee92e9,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251826580253166,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:22:41.602125469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-qw488,Uid:cf270e35-c498-455b-bc82-0a19e8f606aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251755928773654,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-se
rver-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:22:35.615365114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:09e9768b-ce9c-47d6-8650-191c7f864a9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251755817458782,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"lab
els\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T18:22:35.494313744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mtltd,Uid:bee68b4c-c773-4bb2-b088-1fe4a816edf3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251751888345586,Labels:map[string]string{io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:22:31.279575645Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&PodSandboxMetadata{Name:kube-proxy-qxmw4,Uid:3e77278b-62ae-4a68-bbba-ca3108d18280,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251751723061734,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:22:31.104661726Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-979357,Uid:fde484119c5eac540deeb46d4ed91bf6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251740190235836,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fde484119c5eac540deeb46d4ed91bf6,kubernetes.io/config.seen: 2024-09-13T18:22:19.671644460Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-979357,Uid:ab45a45dafa7cbe725acff1543d4a881,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251740189410158,Labels:map[stri
ng]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.34:8443,kubernetes.io/config.hash: ab45a45dafa7cbe725acff1543d4a881,kubernetes.io/config.seen: 2024-09-13T18:22:19.671642771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-979357,Uid:c7509a39b4c733b67776a7ae5d64c186,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251740169435312,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c1
86,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c7509a39b4c733b67776a7ae5d64c186,kubernetes.io/config.seen: 2024-09-13T18:22:19.671645376Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&PodSandboxMetadata{Name:etcd-addons-979357,Uid:55cc9c34cab79d6a845b85b50237201a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726251740165537665,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.34:2379,kubernetes.io/config.hash: 55cc9c34cab79d6a845b85b50237201a,kubernetes.io/config.seen: 2024-09-13T18:22:19.671639534Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/i
nterceptors.go:74" id=17f9cdbc-f31b-45b0-a3f3-54aac74e7965 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.019621869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c686d2e1-9c62-487a-be8e-5b999f1614ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.019759700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c686d2e1-9c62-487a-be8e-5b999f1614ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.019981248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755
065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c686d2e1-9c62-487a-be8e-5b999f1614ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.048861027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=573a3ccc-bb8b-4921-a111-ae430ae32c66 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.048943707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=573a3ccc-bb8b-4921-a111-ae430ae32c66 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.049917848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db22baea-c404-40f4-9c8e-b8cab641b671 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.051107432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252640051069393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db22baea-c404-40f4-9c8e-b8cab641b671 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.051888700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47842b4a-3327-44cd-94e5-61d7160125ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.051961121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47842b4a-3327-44cd-94e5-61d7160125ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:37:20 addons-979357 crio[661]: time="2024-09-13 18:37:20.052230075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:22255494b139f174b63cb9552f7ce5d9965c4fbdfadacade12971758a6d7c34f,PodSandboxId:3dfe2087710ff70556a7315fef49a89402359b106fa837bda3fd211973a8f99f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726252549274069505,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hw97l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e838a30-9cc7-4bd7-a481-378b6fe7bd29,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d3df0eba1f69c460e1b45c99bc2b34d5cc9187e7418cf453c8728af886b6617,PodSandboxId:e04ffe767b47b332a311e0d5bfa8a18486bbb463bc9b43e915cf0badc4060866,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726252409353519154,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806d4c49-56fb-4b01-a2cd-83bdf674d6eb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce,PodSandboxId:c3ecf2966876727ebd3ed8032152a9fee65a2fcee0fdd8c8056f359218ee905f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726251840695356851,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j795q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 943ee71f-15fc-4b01-8fa1-385d59ee92e9,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe,PodSandboxId:68e88bddaa74cdd1c6b049d43a7215c2c0f320385d884237086b8abdf931a230,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726251796529246164,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-qw488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf270e35-c498-455b-bc82-0a19e8f606aa,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31,PodSandboxId:c2dc3a67499c7fd0a5898c8d6199735b3b3acd422ebc719fa480351a446b863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726251757134337412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e9768b-ce9c-47d6-8650-191c7f864a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f,PodSandboxId:abf9b475b59015ac196e49760bd86de9c580685c29a7b28bf4ba6bca39e6ec2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726251755
065524423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mtltd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee68b4c-c773-4bb2-b088-1fe4a816edf3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c,PodSandboxId:44e10dfb950fd8542b6cad158923008de486a5f9f595035b4b932235c39eb956,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726251752259507868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qxmw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e77278b-62ae-4a68-bbba-ca3108d18280,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6,PodSandboxId:d552343eeec8aec1e907ac31f5d100cedcabe20845d1d77eda3124dbbeaa4317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726251740614747066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7509a39b4c733b67776a7ae5d64c186,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2,PodSandboxId:b67ca3f1d294d653bc170dc259a7acaf543d88fea5536493d60247cdeb49a879,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726251740607552426,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fde484119c5eac540deeb46d4ed91bf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2,PodSandboxId:89b0eb49c6580a707bc39e91ffdcf46d27656879114d9b53048e2e70708e1329,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726251740610321933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cc9c34cab79d6a845b85b50237201a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a,PodSandboxId:1644d60ea634e91ff07026783a9d93e0db4e015a6853a21fb46c5a2aa2aaa73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726251740600321275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-979357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab45a45dafa7cbe725acff1543d4a881,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47842b4a-3327-44cd-94e5-61d7160125ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	22255494b139f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   3dfe2087710ff       hello-world-app-55bf9c44b4-hw97l
	3d3df0eba1f69       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         3 minutes ago        Running             nginx                     0                   e04ffe767b47b       nginx
	02c6d6e4b350e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   c3ecf29668767       gcp-auth-89d5ffd79-j795q
	7ab3cdf564912       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   68e88bddaa74c       metrics-server-84c5f94fbc-qw488
	46c152a4abcf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   c2dc3a67499c7       storage-provisioner
	e3bf9ceff710d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   abf9b475b5901       coredns-7c65d6cfc9-mtltd
	9134bc1238e6e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago       Running             kube-proxy                0                   44e10dfb950fd       kube-proxy-qxmw4
	1d7472d2e3f48       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        14 minutes ago       Running             kube-scheduler            0                   d552343eeec8a       kube-scheduler-addons-979357
	f36fa2cd406d1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        14 minutes ago       Running             etcd                      0                   89b0eb49c6580       etcd-addons-979357
	089b47ce33805       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        14 minutes ago       Running             kube-controller-manager   0                   b67ca3f1d294d       kube-controller-manager-addons-979357
	beb227280e8df       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        14 minutes ago       Running             kube-apiserver            0                   1644d60ea634e       kube-apiserver-addons-979357
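The table above is the CRI-level view of the node's containers. A minimal way to reproduce a similar listing on the node itself, assuming the default cri-o socket path recorded in the node annotations further down in this log, would be:

	# list all containers known to the cri-o runtime (roughly the same data as the table above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a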
	
	
	==> coredns [e3bf9ceff710d95ca8581ac5cd76f0ab55d09833110e1a5c63fc0953ce948f4f] <==
	[INFO] 127.0.0.1:55425 - 14478 "HINFO IN 8414480608980431581.7987847580657585340. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013867574s
	[INFO] 10.244.0.8:41401 - 54033 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000413348s
	[INFO] 10.244.0.8:41401 - 10285 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151346s
	[INFO] 10.244.0.8:59177 - 13648 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180964s
	[INFO] 10.244.0.8:59177 - 58194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000217233s
	[INFO] 10.244.0.8:33613 - 8975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149676s
	[INFO] 10.244.0.8:33613 - 55809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167212s
	[INFO] 10.244.0.8:39507 - 64600 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116346s
	[INFO] 10.244.0.8:39507 - 6487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116459s
	[INFO] 10.244.0.8:44408 - 33423 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177557s
	[INFO] 10.244.0.8:44408 - 53388 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095321s
	[INFO] 10.244.0.8:50243 - 29298 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133268s
	[INFO] 10.244.0.8:50243 - 63089 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075946s
	[INFO] 10.244.0.8:44518 - 41049 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067378s
	[INFO] 10.244.0.8:44518 - 48475 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090248s
	[INFO] 10.244.0.8:58663 - 2901 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053667s
	[INFO] 10.244.0.8:58663 - 55639 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037658s
	[INFO] 10.244.0.21:34953 - 59093 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000423399s
	[INFO] 10.244.0.21:35225 - 60921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000298982s
	[INFO] 10.244.0.21:47005 - 14964 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165017s
	[INFO] 10.244.0.21:38065 - 60873 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151065s
	[INFO] 10.244.0.21:58049 - 44728 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129589s
	[INFO] 10.244.0.21:41316 - 5999 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108833s
	[INFO] 10.244.0.21:53728 - 64340 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000828725s
	[INFO] 10.244.0.21:36643 - 40190 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000688535s
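The repeated NXDOMAIN answers above are ordinary cluster-DNS search-path expansion rather than lookup failures: with the default ndots:5 pod resolver settings, a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so it is tried with each search suffix appended before the absolute name finally resolves with NOERROR. A typical pod resolv.conf on such a cluster looks roughly like the sketch below (the nameserver address is the conventional kube-dns ClusterIP and is assumed, not taken from this report):

	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10
	options ndots:5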
	
	
	==> describe nodes <==
	Name:               addons-979357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-979357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-979357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-979357
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-979357
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:36:03 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:36:03 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:36:03 +0000   Fri, 13 Sep 2024 18:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:36:03 +0000   Fri, 13 Sep 2024 18:22:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    addons-979357
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 323f75a62e114a2e93170ef9b4ca6dd9
	  System UUID:                323f75a6-2e11-4a2e-9317-0ef9b4ca6dd9
	  Boot ID:                    007169e1-5e2f-4ead-8631-d0c0eed7c494
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-hw97l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  gcp-auth                    gcp-auth-89d5ffd79-j795q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-mtltd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-979357                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-979357             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-979357    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qxmw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-979357             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-979357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-979357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-979357 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-979357 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-979357 event: Registered Node addons-979357 in Controller
	
	
	==> dmesg <==
	[  +7.222070] kauditd_printk_skb: 22 callbacks suppressed
	[Sep13 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.361549] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.110464] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.984432] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.307990] kauditd_printk_skb: 45 callbacks suppressed
	[  +8.629278] kauditd_printk_skb: 63 callbacks suppressed
	[Sep13 18:24] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.527807] kauditd_printk_skb: 16 callbacks suppressed
	[ +19.654471] kauditd_printk_skb: 40 callbacks suppressed
	[Sep13 18:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:26] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.953826] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.633272] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.939706] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.945246] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.115088] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.244947] kauditd_printk_skb: 31 callbacks suppressed
	[Sep13 18:33] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.314297] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.432965] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:35] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.404432] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f36fa2cd406d1eb54c924b89d86a23d5b5415356638b0a6ab4846430227aaaa2] <==
	{"level":"warn","ts":"2024-09-13T18:23:51.021543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.099142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.021644Z","caller":"traceutil/trace.go:171","msg":"trace[515273731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"387.282484ms","start":"2024-09-13T18:23:50.634355Z","end":"2024-09-13T18:23:51.021638Z","steps":["trace[515273731] 'agreement among raft nodes before linearized reading'  (duration: 387.071303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.021675Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.634324Z","time spent":"387.339943ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-13T18:23:51.022402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.078944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:51.022467Z","caller":"traceutil/trace.go:171","msg":"trace[1756911976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"337.150275ms","start":"2024-09-13T18:23:50.685306Z","end":"2024-09-13T18:23:51.022456Z","steps":["trace[1756911976] 'agreement among raft nodes before linearized reading'  (duration: 337.020545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:51.022506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:23:50.685273Z","time spent":"337.222274ms","remote":"127.0.0.1:53466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-13T18:23:53.608519Z","caller":"traceutil/trace.go:171","msg":"trace[570854755] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"228.533999ms","start":"2024-09-13T18:23:53.379969Z","end":"2024-09-13T18:23:53.608503Z","steps":["trace[570854755] 'process raft request'  (duration: 228.091989ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:24:05.523053Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:24:05.164429Z","time spent":"358.62098ms","remote":"127.0.0.1:53300","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-09-13T18:24:05.526794Z","caller":"traceutil/trace.go:171","msg":"trace[1285637360] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"245.594439ms","start":"2024-09-13T18:24:05.281082Z","end":"2024-09-13T18:24:05.526676Z","steps":["trace[1285637360] 'process raft request'  (duration: 245.425195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:16.746450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.463174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-13T18:32:16.746572Z","caller":"traceutil/trace.go:171","msg":"trace[1646493262] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1944; }","duration":"259.655607ms","start":"2024-09-13T18:32:16.486899Z","end":"2024-09-13T18:32:16.746555Z","steps":["trace[1646493262] 'count revisions from in-memory index tree'  (duration: 259.404889ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:21.625942Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1491}
	{"level":"info","ts":"2024-09-13T18:32:21.662273Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1491,"took":"35.833101ms","hash":2337312588,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3420160,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-13T18:32:21.662341Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2337312588,"revision":1491,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T18:32:47.777404Z","caller":"traceutil/trace.go:171","msg":"trace[9576718] transaction","detail":"{read_only:false; response_revision:2174; number_of_response:1; }","duration":"150.443543ms","start":"2024-09-13T18:32:47.626934Z","end":"2024-09-13T18:32:47.777378Z","steps":["trace[9576718] 'process raft request'  (duration: 150.357849ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:52.478755Z","caller":"traceutil/trace.go:171","msg":"trace[505158] linearizableReadLoop","detail":"{readStateIndex:2358; appliedIndex:2357; }","duration":"421.352793ms","start":"2024-09-13T18:32:52.057386Z","end":"2024-09-13T18:32:52.478739Z","steps":["trace[505158] 'read index received'  (duration: 421.139117ms)","trace[505158] 'applied index is now lower than readState.Index'  (duration: 212.982µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:32:52.479009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.057609ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.479661Z","caller":"traceutil/trace.go:171","msg":"trace[943115826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2200; }","duration":"350.751111ms","start":"2024-09-13T18:32:52.128898Z","end":"2024-09-13T18:32:52.479649Z","steps":["trace[943115826] 'agreement among raft nodes before linearized reading'  (duration: 350.040298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.479012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.574332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.480358Z","caller":"traceutil/trace.go:171","msg":"trace[691500721] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2200; }","duration":"422.967594ms","start":"2024-09-13T18:32:52.057381Z","end":"2024-09-13T18:32:52.480349Z","steps":["trace[691500721] 'agreement among raft nodes before linearized reading'  (duration: 421.548176ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.480506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:52.057333Z","time spent":"423.124824ms","remote":"127.0.0.1:53272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-13T18:32:52.479052Z","caller":"traceutil/trace.go:171","msg":"trace[2022301504] transaction","detail":"{read_only:false; response_revision:2200; number_of_response:1; }","duration":"547.643865ms","start":"2024-09-13T18:32:51.931399Z","end":"2024-09-13T18:32:52.479043Z","steps":["trace[2022301504] 'process raft request'  (duration: 547.179229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:32:52.481455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:32:51.931384Z","time spent":"549.269751ms","remote":"127.0.0.1:40810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2173 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-09-13T18:32:52.479449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.09265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:32:52.481582Z","caller":"traceutil/trace.go:171","msg":"trace[2047800323] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2200; }","duration":"109.228494ms","start":"2024-09-13T18:32:52.372347Z","end":"2024-09-13T18:32:52.481576Z","steps":["trace[2047800323] 'agreement among raft nodes before linearized reading'  (duration: 107.084584ms)"],"step_count":1}
	
	
	==> gcp-auth [02c6d6e4b350e88d9417e6262d530e0be455f8bacc894d0437221f5f74fc33ce] <==
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:24:06 Ready to marshal response ...
	2024/09/13 18:24:06 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:10 Ready to marshal response ...
	2024/09/13 18:32:10 Ready to write response ...
	2024/09/13 18:32:21 Ready to marshal response ...
	2024/09/13 18:32:21 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:32 Ready to marshal response ...
	2024/09/13 18:32:32 Ready to write response ...
	2024/09/13 18:32:43 Ready to marshal response ...
	2024/09/13 18:32:43 Ready to write response ...
	2024/09/13 18:32:45 Ready to marshal response ...
	2024/09/13 18:32:45 Ready to write response ...
	2024/09/13 18:33:16 Ready to marshal response ...
	2024/09/13 18:33:16 Ready to write response ...
	2024/09/13 18:33:24 Ready to marshal response ...
	2024/09/13 18:33:24 Ready to write response ...
	2024/09/13 18:35:46 Ready to marshal response ...
	2024/09/13 18:35:46 Ready to write response ...
	
	
	==> kernel <==
	 18:37:20 up 15 min,  0 users,  load average: 0.15, 0.38, 0.35
	Linux addons-979357 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [beb227280e8dfb38d827f5345ec8c8984cb0a02932ab22106590d49a6d28413a] <==
	I0913 18:32:10.039145       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.81.144"}
	I0913 18:32:15.993730       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 18:32:17.054872       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 18:32:59.435526       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0913 18:32:59.736880       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:10.989953       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:11.997980       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:13.005448       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:14.012493       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0913 18:33:24.691678       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 18:33:24.883354       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.119.71"}
	I0913 18:33:32.763152       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.763216       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.792443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.792504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.897307       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.897376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.917372       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.917776       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:32.942848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:32.943631       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 18:33:33.918112       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 18:33:33.943895       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0913 18:33:34.041990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0913 18:35:46.314495       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.159.60"}
	
	
	==> kube-controller-manager [089b47ce338051609525c4f9381ceba68577833fc0ffa1c55a00b4e704e073a2] <==
	I0913 18:35:46.177064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.659µs"
	I0913 18:35:48.641488       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 18:35:48.647238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.445µs"
	I0913 18:35:48.663191       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0913 18:35:50.011900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.261785ms"
	I0913 18:35:50.012542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.18µs"
	W0913 18:35:53.756221       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:53.756282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:35:56.632447       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:35:56.632481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:35:58.714447       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0913 18:36:02.698831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:36:02.698956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:36:03.448600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-979357"
	W0913 18:36:32.197588       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:36:32.197884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:36:42.187413       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:36:42.187478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:36:42.307348       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:36:42.307401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:36:55.694588       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:36:55.694829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:37:16.888865       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:37:16.888947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:37:18.992277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="3.408µs"
	
	
	==> kube-proxy [9134bc1238e6ea0f130cab81c7973189417fcdfb4544aec08a6ee8aaf314cb0c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:22:33.350612       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:22:33.364476       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.34"]
	E0913 18:22:33.364537       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:22:33.483199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:22:33.483274       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:22:33.483300       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:22:33.488023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:22:33.488274       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:22:33.488283       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:22:33.494316       1 config.go:199] "Starting service config controller"
	I0913 18:22:33.494338       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:22:33.494377       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:22:33.494381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:22:33.497782       1 config.go:328] "Starting node config controller"
	I0913 18:22:33.497794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:22:33.596036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:22:33.596075       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:22:33.598825       1 shared_informer.go:320] Caches are synced for node config
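The "Operation not supported" nftables errors at the top of this section come from kube-proxy's startup cleanup of stale nftables rules; since the log then reports "Using iptables Proxier", they are generally benign noise on a kernel without nftables support. One quick check of whether the guest kernel exposes nftables at all (assuming the nft tool is even present in this Buildroot image, which this report does not confirm) would be:

	# fails with "Operation not supported" (or similar) when the kernel lacks nf_tables
	sudo nft list tables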
	
	
	==> kube-scheduler [1d7472d2e3f48ffc6dd6ccc80ca03ad0ac7078696b11d7b1460addd2949d22e6] <==
	W0913 18:22:23.351491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:22:23.351533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.185862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.185917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.200594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:22:24.200752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.218466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:22:24.218561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.258477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.258532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.395515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.395621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.419001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:22:24.419792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.459549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 18:22:24.459618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.479886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:22:24.480416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:24.498210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.498173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:22:24.498336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:24.953128       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:22:24.953629       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:22:28.042327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:36:31 addons-979357 kubelet[1204]: E0913 18:36:31.020003    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:36:36 addons-979357 kubelet[1204]: E0913 18:36:36.411257    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252596410846113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:36:36 addons-979357 kubelet[1204]: E0913 18:36:36.411774    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252596410846113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:36:42 addons-979357 kubelet[1204]: E0913 18:36:42.019095    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:36:46 addons-979357 kubelet[1204]: E0913 18:36:46.415583    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252606414854378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:36:46 addons-979357 kubelet[1204]: E0913 18:36:46.415643    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252606414854378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:36:55 addons-979357 kubelet[1204]: E0913 18:36:55.018201    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:36:56 addons-979357 kubelet[1204]: E0913 18:36:56.418182    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252616417773738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:36:56 addons-979357 kubelet[1204]: E0913 18:36:56.418474    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252616417773738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:37:06 addons-979357 kubelet[1204]: E0913 18:37:06.421254    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252626420945144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:37:06 addons-979357 kubelet[1204]: E0913 18:37:06.421300    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252626420945144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:37:07 addons-979357 kubelet[1204]: E0913 18:37:07.019133    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fadcf5b8-b54e-4896-9ab6-b7294f3c8503"
	Sep 13 18:37:16 addons-979357 kubelet[1204]: E0913 18:37:16.424055    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252636423483393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:37:16 addons-979357 kubelet[1204]: E0913 18:37:16.424333    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726252636423483393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559239,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:37:19 addons-979357 kubelet[1204]: I0913 18:37:19.022945    1204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-hw97l" podStartSLOduration=90.468930364 podStartE2EDuration="1m33.022925529s" podCreationTimestamp="2024-09-13 18:35:46 +0000 UTC" firstStartedPulling="2024-09-13 18:35:46.707853124 +0000 UTC m=+800.813385901" lastFinishedPulling="2024-09-13 18:35:49.261848285 +0000 UTC m=+803.367381066" observedRunningTime="2024-09-13 18:35:50.003641448 +0000 UTC m=+804.109174245" watchObservedRunningTime="2024-09-13 18:37:19.022925529 +0000 UTC m=+893.128458327"
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.360051    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf270e35-c498-455b-bc82-0a19e8f606aa-tmp-dir\") pod \"cf270e35-c498-455b-bc82-0a19e8f606aa\" (UID: \"cf270e35-c498-455b-bc82-0a19e8f606aa\") "
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.360094    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdc64\" (UniqueName: \"kubernetes.io/projected/cf270e35-c498-455b-bc82-0a19e8f606aa-kube-api-access-rdc64\") pod \"cf270e35-c498-455b-bc82-0a19e8f606aa\" (UID: \"cf270e35-c498-455b-bc82-0a19e8f606aa\") "
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.361169    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf270e35-c498-455b-bc82-0a19e8f606aa-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "cf270e35-c498-455b-bc82-0a19e8f606aa" (UID: "cf270e35-c498-455b-bc82-0a19e8f606aa"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.367743    1204 scope.go:117] "RemoveContainer" containerID="7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe"
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.389752    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf270e35-c498-455b-bc82-0a19e8f606aa-kube-api-access-rdc64" (OuterVolumeSpecName: "kube-api-access-rdc64") pod "cf270e35-c498-455b-bc82-0a19e8f606aa" (UID: "cf270e35-c498-455b-bc82-0a19e8f606aa"). InnerVolumeSpecName "kube-api-access-rdc64". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.395640    1204 scope.go:117] "RemoveContainer" containerID="7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe"
	Sep 13 18:37:20 addons-979357 kubelet[1204]: E0913 18:37:20.396363    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe\": container with ID starting with 7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe not found: ID does not exist" containerID="7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe"
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.396415    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe"} err="failed to get container status \"7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe\": rpc error: code = NotFound desc = could not find container \"7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe\": container with ID starting with 7ab3cdf56491273dcbb9cc1f982f2dc5304571e00b7a1ee3ff3175f91c97fdbe not found: ID does not exist"
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.460817    1204 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf270e35-c498-455b-bc82-0a19e8f606aa-tmp-dir\") on node \"addons-979357\" DevicePath \"\""
	Sep 13 18:37:20 addons-979357 kubelet[1204]: I0913 18:37:20.460886    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rdc64\" (UniqueName: \"kubernetes.io/projected/cf270e35-c498-455b-bc82-0a19e8f606aa-kube-api-access-rdc64\") on node \"addons-979357\" DevicePath \"\""
	
	
	==> storage-provisioner [46c152a4abcf517c32fc79a8f91da41a850adb80ee9597c3f49a9e6206b45f31] <==
	I0913 18:22:38.267389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:22:38.392893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:22:38.393087       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:22:38.604516       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:22:38.626124       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	I0913 18:22:38.627911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a06aae77-a7ca-4bb0-8803-2138b0a92163", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e became leader
	I0913 18:22:38.727799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-979357_ed3b70c5-6474-4e9e-adc9-2ca3ad02df5e!
	

                                                
                                                
-- /stdout --
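A note on the kube-scheduler messages in the log dump above: the "forbidden" list/watch warnings are typically a startup transient while RBAC bootstrap is still completing, and they stop once "Caches are synced" is logged at 18:22:28. A quick way to confirm the scheduler's permissions afterwards (a hypothetical manual check, not something the test harness runs) is:
	kubectl --context addons-979357 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
	kubectl --context addons-979357 auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler
Both should normally print "yes" once the cluster has settled.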
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-979357 -n addons-979357
helpers_test.go:261: (dbg) Run:  kubectl --context addons-979357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-979357 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-979357 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-979357/192.168.39.34
	Start Time:       Fri, 13 Sep 2024 18:24:06 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9h22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h9h22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/busybox to addons-979357
	  Normal   Pulling    11m (x4 over 13m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)    kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m2s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (312.09s)
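For context on the non-running pod listed in the post-mortem: busybox is stuck in ImagePullBackOff because the node cannot authenticate to gcr.io ("unable to retrieve auth token: invalid username/password"), which is unrelated to metrics-server itself. One way to retry the pull directly on the node with the CRI client, sidestepping the kubelet back-off timer (a hypothetical manual step, not part of the test), is:
	minikube -p addons-979357 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"
If the same authentication error appears, the problem is registry access from the node (credentials or network) rather than anything in the addon under test.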

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 node stop m02 -v=7 --alsologtostderr
E0913 18:46:38.554079   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:47:19.516301   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.459070581s)

                                                
                                                
-- stdout --
	* Stopping node "ha-617764-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:46:33.534961   26819 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:46:33.535241   26819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:46:33.535251   26819 out.go:358] Setting ErrFile to fd 2...
	I0913 18:46:33.535256   26819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:46:33.535519   26819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:46:33.535837   26819 mustload.go:65] Loading cluster: ha-617764
	I0913 18:46:33.536212   26819 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:46:33.536229   26819 stop.go:39] StopHost: ha-617764-m02
	I0913 18:46:33.536577   26819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:46:33.536617   26819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:46:33.553144   26819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0913 18:46:33.553998   26819 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:46:33.554574   26819 main.go:141] libmachine: Using API Version  1
	I0913 18:46:33.554602   26819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:46:33.554936   26819 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:46:33.557322   26819 out.go:177] * Stopping node "ha-617764-m02"  ...
	I0913 18:46:33.558589   26819 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 18:46:33.558614   26819 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:46:33.558791   26819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 18:46:33.558811   26819 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:46:33.561512   26819 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:46:33.561913   26819 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:46:33.561938   26819 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:46:33.562075   26819 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:46:33.562239   26819 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:46:33.562364   26819 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:46:33.562494   26819 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:46:33.646242   26819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 18:46:33.700978   26819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 18:46:33.762139   26819 main.go:141] libmachine: Stopping "ha-617764-m02"...
	I0913 18:46:33.762177   26819 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:46:33.763479   26819 main.go:141] libmachine: (ha-617764-m02) Calling .Stop
	I0913 18:46:33.766669   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 0/120
	I0913 18:46:34.768463   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 1/120
	I0913 18:46:35.769700   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 2/120
	I0913 18:46:36.771809   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 3/120
	I0913 18:46:37.773021   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 4/120
	I0913 18:46:38.775072   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 5/120
	I0913 18:46:39.776763   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 6/120
	I0913 18:46:40.777897   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 7/120
	I0913 18:46:41.779283   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 8/120
	I0913 18:46:42.780638   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 9/120
	I0913 18:46:43.782927   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 10/120
	I0913 18:46:44.784589   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 11/120
	I0913 18:46:45.785836   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 12/120
	I0913 18:46:46.787061   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 13/120
	I0913 18:46:47.788406   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 14/120
	I0913 18:46:48.790273   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 15/120
	I0913 18:46:49.791660   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 16/120
	I0913 18:46:50.792852   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 17/120
	I0913 18:46:51.794005   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 18/120
	I0913 18:46:52.795302   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 19/120
	I0913 18:46:53.797338   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 20/120
	I0913 18:46:54.798627   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 21/120
	I0913 18:46:55.799998   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 22/120
	I0913 18:46:56.801388   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 23/120
	I0913 18:46:57.802903   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 24/120
	I0913 18:46:58.804373   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 25/120
	I0913 18:46:59.805745   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 26/120
	I0913 18:47:00.806997   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 27/120
	I0913 18:47:01.808273   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 28/120
	I0913 18:47:02.809506   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 29/120
	I0913 18:47:03.811705   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 30/120
	I0913 18:47:04.813099   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 31/120
	I0913 18:47:05.814344   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 32/120
	I0913 18:47:06.816769   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 33/120
	I0913 18:47:07.818008   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 34/120
	I0913 18:47:08.819851   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 35/120
	I0913 18:47:09.821272   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 36/120
	I0913 18:47:10.822642   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 37/120
	I0913 18:47:11.824467   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 38/120
	I0913 18:47:12.825765   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 39/120
	I0913 18:47:13.827811   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 40/120
	I0913 18:47:14.829139   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 41/120
	I0913 18:47:15.830510   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 42/120
	I0913 18:47:16.832490   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 43/120
	I0913 18:47:17.834137   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 44/120
	I0913 18:47:18.836070   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 45/120
	I0913 18:47:19.837448   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 46/120
	I0913 18:47:20.839364   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 47/120
	I0913 18:47:21.840841   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 48/120
	I0913 18:47:22.842173   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 49/120
	I0913 18:47:23.844118   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 50/120
	I0913 18:47:24.845347   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 51/120
	I0913 18:47:25.846730   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 52/120
	I0913 18:47:26.848466   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 53/120
	I0913 18:47:27.849966   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 54/120
	I0913 18:47:28.851797   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 55/120
	I0913 18:47:29.853407   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 56/120
	I0913 18:47:30.855023   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 57/120
	I0913 18:47:31.856459   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 58/120
	I0913 18:47:32.857595   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 59/120
	I0913 18:47:33.859569   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 60/120
	I0913 18:47:34.860968   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 61/120
	I0913 18:47:35.862268   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 62/120
	I0913 18:47:36.864501   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 63/120
	I0913 18:47:37.865789   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 64/120
	I0913 18:47:38.867519   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 65/120
	I0913 18:47:39.868819   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 66/120
	I0913 18:47:40.869924   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 67/120
	I0913 18:47:41.871235   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 68/120
	I0913 18:47:42.872765   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 69/120
	I0913 18:47:43.874801   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 70/120
	I0913 18:47:44.876040   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 71/120
	I0913 18:47:45.877409   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 72/120
	I0913 18:47:46.879118   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 73/120
	I0913 18:47:47.880829   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 74/120
	I0913 18:47:48.882879   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 75/120
	I0913 18:47:49.884532   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 76/120
	I0913 18:47:50.885839   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 77/120
	I0913 18:47:51.887659   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 78/120
	I0913 18:47:52.889142   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 79/120
	I0913 18:47:53.890977   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 80/120
	I0913 18:47:54.892298   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 81/120
	I0913 18:47:55.893339   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 82/120
	I0913 18:47:56.894618   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 83/120
	I0913 18:47:57.895842   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 84/120
	I0913 18:47:58.896991   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 85/120
	I0913 18:47:59.898201   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 86/120
	I0913 18:48:00.899221   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 87/120
	I0913 18:48:01.901159   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 88/120
	I0913 18:48:02.902547   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 89/120
	I0913 18:48:03.904848   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 90/120
	I0913 18:48:04.907192   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 91/120
	I0913 18:48:05.909122   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 92/120
	I0913 18:48:06.910596   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 93/120
	I0913 18:48:07.912682   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 94/120
	I0913 18:48:08.914893   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 95/120
	I0913 18:48:09.916353   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 96/120
	I0913 18:48:10.917922   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 97/120
	I0913 18:48:11.919358   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 98/120
	I0913 18:48:12.920566   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 99/120
	I0913 18:48:13.922676   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 100/120
	I0913 18:48:14.924145   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 101/120
	I0913 18:48:15.925582   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 102/120
	I0913 18:48:16.927060   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 103/120
	I0913 18:48:17.928445   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 104/120
	I0913 18:48:18.930463   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 105/120
	I0913 18:48:19.932775   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 106/120
	I0913 18:48:20.934080   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 107/120
	I0913 18:48:21.935399   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 108/120
	I0913 18:48:22.936906   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 109/120
	I0913 18:48:23.938878   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 110/120
	I0913 18:48:24.940075   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 111/120
	I0913 18:48:25.941445   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 112/120
	I0913 18:48:26.942710   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 113/120
	I0913 18:48:27.943978   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 114/120
	I0913 18:48:28.945912   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 115/120
	I0913 18:48:29.947194   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 116/120
	I0913 18:48:30.948972   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 117/120
	I0913 18:48:31.950436   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 118/120
	I0913 18:48:32.952448   26819 main.go:141] libmachine: (ha-617764-m02) Waiting for machine to stop 119/120
	I0913 18:48:33.953374   26819 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 18:48:33.953487   26819 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-617764 node stop m02 -v=7 --alsologtostderr": exit status 30
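What the stderr above shows is a fixed poll-and-give-up loop: the kvm2 driver asks libvirt to shut the guest down, checks its state once per second for 120 attempts (the "Waiting for machine to stop N/120" lines), and returns exit status 30 because the domain still reports "Running" after two minutes. As a hypothetical manual follow-up outside the test, the stuck domain can be inspected and, if necessary, forced off on the KVM host with virsh:
	sudo virsh list --all            # the ha-617764-m02 domain should still show as running
	sudo virsh destroy ha-617764-m02 # hard power-off, equivalent to pulling the plug
This does not explain the underlying problem (the guest apparently ignoring the shutdown request), but it unblocks subsequent cleanup.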
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
E0913 18:48:41.439237   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (19.163188728s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:48:33.995890   27266 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:48:33.995986   27266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:48:33.995990   27266 out.go:358] Setting ErrFile to fd 2...
	I0913 18:48:33.995994   27266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:48:33.996172   27266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:48:33.996326   27266 out.go:352] Setting JSON to false
	I0913 18:48:33.996354   27266 mustload.go:65] Loading cluster: ha-617764
	I0913 18:48:33.996386   27266 notify.go:220] Checking for updates...
	I0913 18:48:33.996731   27266 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:48:33.996744   27266 status.go:255] checking status of ha-617764 ...
	I0913 18:48:33.997138   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:33.997191   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.015377   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0913 18:48:34.015911   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.016534   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.016566   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.016901   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.017099   27266 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:48:34.018514   27266 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:48:34.018533   27266 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:48:34.018839   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:34.018880   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.032751   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0913 18:48:34.033144   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.033585   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.033602   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.033888   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.034077   27266 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:48:34.036535   27266 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:34.036967   27266 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:48:34.036999   27266 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:34.037113   27266 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:48:34.037486   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:34.037528   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.052129   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0913 18:48:34.052562   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.052962   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.052981   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.053349   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.053534   27266 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:48:34.053713   27266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:34.053736   27266 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:48:34.056479   27266 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:34.056895   27266 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:48:34.056919   27266 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:34.057047   27266 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:48:34.057219   27266 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:48:34.057341   27266 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:48:34.057462   27266 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:48:34.144564   27266 ssh_runner.go:195] Run: systemctl --version
	I0913 18:48:34.153575   27266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:48:34.173775   27266 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:48:34.173809   27266 api_server.go:166] Checking apiserver status ...
	I0913 18:48:34.173849   27266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:48:34.201310   27266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:48:34.211427   27266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:48:34.211477   27266 ssh_runner.go:195] Run: ls
	I0913 18:48:34.216360   27266 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:48:34.221920   27266 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:48:34.221961   27266 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:48:34.221973   27266 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:48:34.222003   27266 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:48:34.222312   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:34.222348   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.238276   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0913 18:48:34.238717   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.239148   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.239161   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.239463   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.239643   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:48:34.241096   27266 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:48:34.241116   27266 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:48:34.241385   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:34.241424   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.255266   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41293
	I0913 18:48:34.255585   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.255992   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.256014   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.256364   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.256542   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:48:34.258989   27266 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:34.259390   27266 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:48:34.259419   27266 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:34.259590   27266 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:48:34.259881   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:34.259916   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:34.275100   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0913 18:48:34.275513   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:34.276032   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:34.276050   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:34.276327   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:34.276503   27266 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:48:34.276679   27266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:34.276697   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:48:34.279318   27266 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:34.279773   27266 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:48:34.279806   27266 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:34.279945   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:48:34.280109   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:48:34.280244   27266 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:48:34.280364   27266 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:48:52.754472   27266 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:48:52.754568   27266 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:48:52.754588   27266 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:48:52.754601   27266 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:48:52.754624   27266 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:48:52.754632   27266 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:48:52.755027   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.755084   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:52.770833   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0913 18:48:52.771326   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:52.771759   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:52.771779   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:52.772153   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:52.772353   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:48:52.773914   27266 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:48:52.773930   27266 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:48:52.774262   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.774306   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:52.789578   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0913 18:48:52.790039   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:52.790596   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:52.790625   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:52.790979   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:52.791164   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:48:52.793816   27266 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:48:52.794221   27266 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:48:52.794244   27266 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:48:52.794450   27266 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:48:52.794832   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.794883   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:52.809720   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0913 18:48:52.810141   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:52.810612   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:52.810639   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:52.810944   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:52.811136   27266 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:48:52.811338   27266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:52.811363   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:48:52.814076   27266 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:48:52.814528   27266 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:48:52.814551   27266 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:48:52.814700   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:48:52.814865   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:48:52.815022   27266 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:48:52.815155   27266 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:48:52.903063   27266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:48:52.920515   27266 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:48:52.920542   27266 api_server.go:166] Checking apiserver status ...
	I0913 18:48:52.920574   27266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:48:52.935117   27266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:48:52.945420   27266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:48:52.945492   27266 ssh_runner.go:195] Run: ls
	I0913 18:48:52.950813   27266 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:48:52.956655   27266 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:48:52.956679   27266 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:48:52.956690   27266 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:48:52.956709   27266 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:48:52.957083   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.957123   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:52.972815   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0913 18:48:52.973353   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:52.973899   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:52.973920   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:52.974282   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:52.974453   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:48:52.975979   27266 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:48:52.976006   27266 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:48:52.976395   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.976449   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:52.991373   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0913 18:48:52.991832   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:52.992438   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:52.992458   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:52.992745   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:52.992941   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:48:52.996036   27266 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:48:52.996437   27266 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:48:52.996461   27266 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:48:52.996578   27266 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:48:52.996899   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:52.996938   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:53.011377   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0913 18:48:53.011842   27266 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:53.012375   27266 main.go:141] libmachine: Using API Version  1
	I0913 18:48:53.012393   27266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:53.012691   27266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:53.012864   27266 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:48:53.013036   27266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:53.013054   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:48:53.015767   27266 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:48:53.016146   27266 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:48:53.016175   27266 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:48:53.016283   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:48:53.016453   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:48:53.016589   27266 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:48:53.016729   27266 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:48:53.098933   27266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:48:53.116331   27266 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.370129111s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m03_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:41:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:41:46.342076   22792 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:41:46.342355   22792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:46.342364   22792 out.go:358] Setting ErrFile to fd 2...
	I0913 18:41:46.342369   22792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:46.342538   22792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:41:46.343063   22792 out.go:352] Setting JSON to false
	I0913 18:41:46.343967   22792 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1449,"bootTime":1726251457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:41:46.344058   22792 start.go:139] virtualization: kvm guest
	I0913 18:41:46.346218   22792 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:41:46.347591   22792 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:41:46.347592   22792 notify.go:220] Checking for updates...
	I0913 18:41:46.349905   22792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:41:46.351182   22792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:41:46.352355   22792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.353531   22792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:41:46.354851   22792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:41:46.356378   22792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:41:46.390751   22792 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:41:46.392075   22792 start.go:297] selected driver: kvm2
	I0913 18:41:46.392084   22792 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:41:46.392094   22792 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:41:46.392812   22792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:41:46.392896   22792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:41:46.407318   22792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:41:46.407361   22792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:41:46.407592   22792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:41:46.407622   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:41:46.407659   22792 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 18:41:46.407666   22792 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 18:41:46.407735   22792 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:41:46.407833   22792 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:41:46.409833   22792 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:41:46.411217   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:41:46.411244   22792 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:41:46.411250   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:41:46.411328   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:41:46.411342   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:41:46.411638   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:41:46.411660   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json: {Name:mk4f12574a12f474df5f3b929e48935a5774feaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:41:46.411795   22792 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:41:46.411830   22792 start.go:364] duration metric: took 18.873µs to acquireMachinesLock for "ha-617764"
	I0913 18:41:46.411852   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:41:46.411920   22792 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:41:46.413820   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:41:46.413936   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:41:46.413977   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:41:46.428170   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0913 18:41:46.428606   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:41:46.429169   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:41:46.429192   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:41:46.429573   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:41:46.429755   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:41:46.429898   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:41:46.430037   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:41:46.430070   22792 client.go:168] LocalClient.Create starting
	I0913 18:41:46.430113   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:41:46.430174   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:41:46.430193   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:41:46.430263   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:41:46.430287   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:41:46.430308   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:41:46.430331   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:41:46.430350   22792 main.go:141] libmachine: (ha-617764) Calling .PreCreateCheck
	I0913 18:41:46.430738   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:41:46.431083   22792 main.go:141] libmachine: Creating machine...
	I0913 18:41:46.431095   22792 main.go:141] libmachine: (ha-617764) Calling .Create
	I0913 18:41:46.431240   22792 main.go:141] libmachine: (ha-617764) Creating KVM machine...
	I0913 18:41:46.432342   22792 main.go:141] libmachine: (ha-617764) DBG | found existing default KVM network
	I0913 18:41:46.432950   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.432804   22815 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0913 18:41:46.432972   22792 main.go:141] libmachine: (ha-617764) DBG | created network xml: 
	I0913 18:41:46.432985   22792 main.go:141] libmachine: (ha-617764) DBG | <network>
	I0913 18:41:46.432992   22792 main.go:141] libmachine: (ha-617764) DBG |   <name>mk-ha-617764</name>
	I0913 18:41:46.433004   22792 main.go:141] libmachine: (ha-617764) DBG |   <dns enable='no'/>
	I0913 18:41:46.433009   22792 main.go:141] libmachine: (ha-617764) DBG |   
	I0913 18:41:46.433017   22792 main.go:141] libmachine: (ha-617764) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:41:46.433026   22792 main.go:141] libmachine: (ha-617764) DBG |     <dhcp>
	I0913 18:41:46.433036   22792 main.go:141] libmachine: (ha-617764) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:41:46.433054   22792 main.go:141] libmachine: (ha-617764) DBG |     </dhcp>
	I0913 18:41:46.433063   22792 main.go:141] libmachine: (ha-617764) DBG |   </ip>
	I0913 18:41:46.433068   22792 main.go:141] libmachine: (ha-617764) DBG |   
	I0913 18:41:46.433076   22792 main.go:141] libmachine: (ha-617764) DBG | </network>
	I0913 18:41:46.433082   22792 main.go:141] libmachine: (ha-617764) DBG | 
	I0913 18:41:46.438128   22792 main.go:141] libmachine: (ha-617764) DBG | trying to create private KVM network mk-ha-617764 192.168.39.0/24...
	I0913 18:41:46.501990   22792 main.go:141] libmachine: (ha-617764) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 ...
	I0913 18:41:46.502020   22792 main.go:141] libmachine: (ha-617764) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:41:46.502029   22792 main.go:141] libmachine: (ha-617764) DBG | private KVM network mk-ha-617764 192.168.39.0/24 created
	I0913 18:41:46.502049   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.501959   22815 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.502172   22792 main.go:141] libmachine: (ha-617764) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:41:46.746853   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.746736   22815 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa...
	I0913 18:41:46.901725   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.901613   22815 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/ha-617764.rawdisk...
	I0913 18:41:46.901768   22792 main.go:141] libmachine: (ha-617764) DBG | Writing magic tar header
	I0913 18:41:46.901781   22792 main.go:141] libmachine: (ha-617764) DBG | Writing SSH key tar header
	I0913 18:41:46.901791   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.901725   22815 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 ...
	I0913 18:41:46.901917   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764
	I0913 18:41:46.901965   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 (perms=drwx------)
	I0913 18:41:46.901980   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:41:46.901994   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.902001   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:41:46.902008   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:41:46.902014   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:41:46.902024   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home
	I0913 18:41:46.902029   22792 main.go:141] libmachine: (ha-617764) DBG | Skipping /home - not owner
	I0913 18:41:46.902039   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:41:46.902056   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:41:46.902071   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:41:46.902082   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:41:46.902113   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:41:46.902131   22792 main.go:141] libmachine: (ha-617764) Creating domain...
	I0913 18:41:46.903143   22792 main.go:141] libmachine: (ha-617764) define libvirt domain using xml: 
	I0913 18:41:46.903185   22792 main.go:141] libmachine: (ha-617764) <domain type='kvm'>
	I0913 18:41:46.903198   22792 main.go:141] libmachine: (ha-617764)   <name>ha-617764</name>
	I0913 18:41:46.903209   22792 main.go:141] libmachine: (ha-617764)   <memory unit='MiB'>2200</memory>
	I0913 18:41:46.903220   22792 main.go:141] libmachine: (ha-617764)   <vcpu>2</vcpu>
	I0913 18:41:46.903227   22792 main.go:141] libmachine: (ha-617764)   <features>
	I0913 18:41:46.903237   22792 main.go:141] libmachine: (ha-617764)     <acpi/>
	I0913 18:41:46.903245   22792 main.go:141] libmachine: (ha-617764)     <apic/>
	I0913 18:41:46.903255   22792 main.go:141] libmachine: (ha-617764)     <pae/>
	I0913 18:41:46.903267   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903300   22792 main.go:141] libmachine: (ha-617764)   </features>
	I0913 18:41:46.903322   22792 main.go:141] libmachine: (ha-617764)   <cpu mode='host-passthrough'>
	I0913 18:41:46.903331   22792 main.go:141] libmachine: (ha-617764)   
	I0913 18:41:46.903341   22792 main.go:141] libmachine: (ha-617764)   </cpu>
	I0913 18:41:46.903378   22792 main.go:141] libmachine: (ha-617764)   <os>
	I0913 18:41:46.903394   22792 main.go:141] libmachine: (ha-617764)     <type>hvm</type>
	I0913 18:41:46.903401   22792 main.go:141] libmachine: (ha-617764)     <boot dev='cdrom'/>
	I0913 18:41:46.903407   22792 main.go:141] libmachine: (ha-617764)     <boot dev='hd'/>
	I0913 18:41:46.903413   22792 main.go:141] libmachine: (ha-617764)     <bootmenu enable='no'/>
	I0913 18:41:46.903419   22792 main.go:141] libmachine: (ha-617764)   </os>
	I0913 18:41:46.903426   22792 main.go:141] libmachine: (ha-617764)   <devices>
	I0913 18:41:46.903449   22792 main.go:141] libmachine: (ha-617764)     <disk type='file' device='cdrom'>
	I0913 18:41:46.903459   22792 main.go:141] libmachine: (ha-617764)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/boot2docker.iso'/>
	I0913 18:41:46.903464   22792 main.go:141] libmachine: (ha-617764)       <target dev='hdc' bus='scsi'/>
	I0913 18:41:46.903468   22792 main.go:141] libmachine: (ha-617764)       <readonly/>
	I0913 18:41:46.903472   22792 main.go:141] libmachine: (ha-617764)     </disk>
	I0913 18:41:46.903477   22792 main.go:141] libmachine: (ha-617764)     <disk type='file' device='disk'>
	I0913 18:41:46.903482   22792 main.go:141] libmachine: (ha-617764)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:41:46.903489   22792 main.go:141] libmachine: (ha-617764)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/ha-617764.rawdisk'/>
	I0913 18:41:46.903495   22792 main.go:141] libmachine: (ha-617764)       <target dev='hda' bus='virtio'/>
	I0913 18:41:46.903499   22792 main.go:141] libmachine: (ha-617764)     </disk>
	I0913 18:41:46.903503   22792 main.go:141] libmachine: (ha-617764)     <interface type='network'>
	I0913 18:41:46.903510   22792 main.go:141] libmachine: (ha-617764)       <source network='mk-ha-617764'/>
	I0913 18:41:46.903514   22792 main.go:141] libmachine: (ha-617764)       <model type='virtio'/>
	I0913 18:41:46.903529   22792 main.go:141] libmachine: (ha-617764)     </interface>
	I0913 18:41:46.903545   22792 main.go:141] libmachine: (ha-617764)     <interface type='network'>
	I0913 18:41:46.903560   22792 main.go:141] libmachine: (ha-617764)       <source network='default'/>
	I0913 18:41:46.903572   22792 main.go:141] libmachine: (ha-617764)       <model type='virtio'/>
	I0913 18:41:46.903580   22792 main.go:141] libmachine: (ha-617764)     </interface>
	I0913 18:41:46.903585   22792 main.go:141] libmachine: (ha-617764)     <serial type='pty'>
	I0913 18:41:46.903591   22792 main.go:141] libmachine: (ha-617764)       <target port='0'/>
	I0913 18:41:46.903600   22792 main.go:141] libmachine: (ha-617764)     </serial>
	I0913 18:41:46.903609   22792 main.go:141] libmachine: (ha-617764)     <console type='pty'>
	I0913 18:41:46.903619   22792 main.go:141] libmachine: (ha-617764)       <target type='serial' port='0'/>
	I0913 18:41:46.903637   22792 main.go:141] libmachine: (ha-617764)     </console>
	I0913 18:41:46.903652   22792 main.go:141] libmachine: (ha-617764)     <rng model='virtio'>
	I0913 18:41:46.903666   22792 main.go:141] libmachine: (ha-617764)       <backend model='random'>/dev/random</backend>
	I0913 18:41:46.903675   22792 main.go:141] libmachine: (ha-617764)     </rng>
	I0913 18:41:46.903682   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903691   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903699   22792 main.go:141] libmachine: (ha-617764)   </devices>
	I0913 18:41:46.903708   22792 main.go:141] libmachine: (ha-617764) </domain>
	I0913 18:41:46.903718   22792 main.go:141] libmachine: (ha-617764) 
	I0913 18:41:46.908004   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:03:35:b9 in network default
	I0913 18:41:46.908582   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:46.908600   22792 main.go:141] libmachine: (ha-617764) Ensuring networks are active...
	I0913 18:41:46.909237   22792 main.go:141] libmachine: (ha-617764) Ensuring network default is active
	I0913 18:41:46.909547   22792 main.go:141] libmachine: (ha-617764) Ensuring network mk-ha-617764 is active
	I0913 18:41:46.910141   22792 main.go:141] libmachine: (ha-617764) Getting domain xml...
	I0913 18:41:46.910893   22792 main.go:141] libmachine: (ha-617764) Creating domain...
	I0913 18:41:48.077626   22792 main.go:141] libmachine: (ha-617764) Waiting to get IP...
	I0913 18:41:48.078377   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.078794   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.078836   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.078769   22815 retry.go:31] will retry after 204.25518ms: waiting for machine to come up
	I0913 18:41:48.284172   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.284644   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.284671   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.284596   22815 retry.go:31] will retry after 380.64238ms: waiting for machine to come up
	I0913 18:41:48.667071   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.667404   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.667448   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.667387   22815 retry.go:31] will retry after 461.878657ms: waiting for machine to come up
	I0913 18:41:49.131208   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:49.131674   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:49.131696   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:49.131636   22815 retry.go:31] will retry after 465.910019ms: waiting for machine to come up
	I0913 18:41:49.599586   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:49.600042   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:49.600071   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:49.599990   22815 retry.go:31] will retry after 520.107531ms: waiting for machine to come up
	I0913 18:41:50.121442   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:50.121811   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:50.121847   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:50.121771   22815 retry.go:31] will retry after 841.781356ms: waiting for machine to come up
	I0913 18:41:50.964741   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:50.965088   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:50.965138   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:50.965055   22815 retry.go:31] will retry after 878.516977ms: waiting for machine to come up
	I0913 18:41:51.844650   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:51.845078   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:51.845105   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:51.845024   22815 retry.go:31] will retry after 1.02797598s: waiting for machine to come up
	I0913 18:41:52.874267   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:52.874720   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:52.874771   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:52.874669   22815 retry.go:31] will retry after 1.506028162s: waiting for machine to come up
	I0913 18:41:54.382227   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:54.382632   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:54.382653   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:54.382588   22815 retry.go:31] will retry after 2.112322208s: waiting for machine to come up
	I0913 18:41:56.496683   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:56.497136   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:56.497181   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:56.497110   22815 retry.go:31] will retry after 2.314980479s: waiting for machine to come up
	I0913 18:41:58.814590   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:58.814997   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:58.815019   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:58.814968   22815 retry.go:31] will retry after 3.001940314s: waiting for machine to come up
	I0913 18:42:01.818637   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:01.818951   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:42:01.818972   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:42:01.818927   22815 retry.go:31] will retry after 4.031102313s: waiting for machine to come up
	I0913 18:42:05.852122   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:05.852506   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:42:05.852527   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:42:05.852470   22815 retry.go:31] will retry after 4.375378529s: waiting for machine to come up
	I0913 18:42:10.229015   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.229456   22792 main.go:141] libmachine: (ha-617764) Found IP for machine: 192.168.39.145
	I0913 18:42:10.229476   22792 main.go:141] libmachine: (ha-617764) Reserving static IP address...
	I0913 18:42:10.229488   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has current primary IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.229797   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find host DHCP lease matching {name: "ha-617764", mac: "52:54:00:1a:5d:60", ip: "192.168.39.145"} in network mk-ha-617764
	I0913 18:42:10.299811   22792 main.go:141] libmachine: (ha-617764) DBG | Getting to WaitForSSH function...
	I0913 18:42:10.299835   22792 main.go:141] libmachine: (ha-617764) Reserved static IP address: 192.168.39.145
	I0913 18:42:10.299847   22792 main.go:141] libmachine: (ha-617764) Waiting for SSH to be available...
	I0913 18:42:10.302478   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.302834   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.302854   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.302969   22792 main.go:141] libmachine: (ha-617764) DBG | Using SSH client type: external
	I0913 18:42:10.302995   22792 main.go:141] libmachine: (ha-617764) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa (-rw-------)
	I0913 18:42:10.303051   22792 main.go:141] libmachine: (ha-617764) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:42:10.303075   22792 main.go:141] libmachine: (ha-617764) DBG | About to run SSH command:
	I0913 18:42:10.303090   22792 main.go:141] libmachine: (ha-617764) DBG | exit 0
	I0913 18:42:10.426273   22792 main.go:141] libmachine: (ha-617764) DBG | SSH cmd err, output: <nil>: 
	I0913 18:42:10.426570   22792 main.go:141] libmachine: (ha-617764) KVM machine creation complete!
	I0913 18:42:10.426839   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:42:10.427462   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:10.427655   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:10.427809   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:42:10.427826   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:10.428962   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:42:10.428973   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:42:10.428985   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:42:10.428992   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.431154   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.431525   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.431551   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.431737   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.431931   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.432072   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.432202   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.432369   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.432565   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.432579   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:42:10.533614   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:10.533653   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:42:10.533661   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.536476   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.536863   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.536896   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.537040   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.537233   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.537404   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.537541   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.537692   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.537958   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.537969   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:42:10.642894   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:42:10.643008   22792 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:42:10.643022   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:42:10.643031   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.643282   22792 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:42:10.643309   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.643482   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.646247   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.646623   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.646650   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.646771   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.646959   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.647132   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.647295   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.647445   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.647616   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.647626   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:42:10.763740   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:42:10.763771   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.766562   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.766902   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.766930   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.767076   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.767278   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.767451   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.767568   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.767702   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.767869   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.767885   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:42:10.883089   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:10.883119   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:42:10.883137   22792 buildroot.go:174] setting up certificates
	I0913 18:42:10.883191   22792 provision.go:84] configureAuth start
	I0913 18:42:10.883207   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.883440   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:10.886378   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.886734   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.886754   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.886911   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.888976   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.889323   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.889339   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.889465   22792 provision.go:143] copyHostCerts
	I0913 18:42:10.889498   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:10.889526   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:42:10.889534   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:10.889595   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:42:10.889676   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:10.889704   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:42:10.889708   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:10.889730   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:42:10.889783   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:10.889800   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:42:10.889803   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:10.889823   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
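
The copyHostCerts step above refreshes cert.pem, key.pem and ca.pem under the .minikube root by removing any stale copy before writing the new one. A minimal stand-alone sketch of that remove-then-copy pattern in Go (the paths below are illustrative placeholders, not minikube's exec_runner helpers):

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    // copyCert removes any existing destination file and then copies src to dst,
    // mirroring the "found ..., removing ..." / "cp: ..." sequence in the log.
    func copyCert(src, dst string, perm os.FileMode) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return fmt.Errorf("rm %s: %w", dst, err)
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, perm)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Hypothetical paths standing in for the .minikube cert locations seen above.
        base := os.ExpandEnv("$HOME/.minikube")
        for _, name := range []string{"cert.pem", "key.pem", "ca.pem"} {
            src := filepath.Join(base, "certs", name)
            dst := filepath.Join(base, name)
            if err := copyCert(src, dst, 0o600); err != nil {
                fmt.Fprintln(os.Stderr, "copy failed:", err)
            }
        }
    }
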
	I0913 18:42:10.889878   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
	I0913 18:42:11.091571   22792 provision.go:177] copyRemoteCerts
	I0913 18:42:11.091641   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:42:11.091663   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.094175   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.094504   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.094534   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.094665   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.094832   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.094937   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.095049   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.176343   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:42:11.176413   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:42:11.200756   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:42:11.200825   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 18:42:11.224844   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:42:11.224901   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:42:11.248467   22792 provision.go:87] duration metric: took 365.261129ms to configureAuth
	I0913 18:42:11.248494   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:42:11.248676   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:11.248745   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.251102   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.251430   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.251460   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.251576   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.251729   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.251860   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.251978   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.252097   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:11.252311   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:11.252326   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:42:11.474430   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:42:11.474454   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:42:11.474462   22792 main.go:141] libmachine: (ha-617764) Calling .GetURL
	I0913 18:42:11.475676   22792 main.go:141] libmachine: (ha-617764) DBG | Using libvirt version 6000000
	I0913 18:42:11.477592   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.477910   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.477933   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.478053   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:42:11.478067   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:42:11.478074   22792 client.go:171] duration metric: took 25.04799423s to LocalClient.Create
	I0913 18:42:11.478112   22792 start.go:167] duration metric: took 25.048062384s to libmachine.API.Create "ha-617764"
	I0913 18:42:11.478125   22792 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 18:42:11.478143   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:42:11.478160   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.478359   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:42:11.478384   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.480294   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.480543   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.480561   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.480705   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.480847   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.480987   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.481112   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.565059   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:42:11.569516   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:42:11.569550   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:42:11.569637   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:42:11.569734   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:42:11.569745   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:42:11.569860   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:42:11.579256   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:11.603060   22792 start.go:296] duration metric: took 124.923337ms for postStartSetup
	I0913 18:42:11.603117   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:42:11.603688   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:11.606119   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.606546   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.606572   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.606803   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:11.606978   22792 start.go:128] duration metric: took 25.195049778s to createHost
	I0913 18:42:11.607011   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.609202   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.609513   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.609531   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.609667   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.609836   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.609967   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.610070   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.610208   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:11.610404   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:11.610417   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:42:11.714852   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726252931.693013594
	
	I0913 18:42:11.714875   22792 fix.go:216] guest clock: 1726252931.693013594
	I0913 18:42:11.714884   22792 fix.go:229] Guest: 2024-09-13 18:42:11.693013594 +0000 UTC Remote: 2024-09-13 18:42:11.606989503 +0000 UTC m=+25.297899776 (delta=86.024091ms)
	I0913 18:42:11.714951   22792 fix.go:200] guest clock delta is within tolerance: 86.024091ms
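
The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host-side timestamp, and accept the ~86 ms drift as being within tolerance. A small sketch of that comparison using the two timestamps from the log; the tolerance constant is an assumption for illustration, not the value minikube uses:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the log above: guest clock vs. remote (host-side) timestamp.
        guest := time.Date(2024, 9, 13, 18, 42, 11, 693013594, time.UTC)
        remote := time.Date(2024, 9, 13, 18, 42, 11, 606989503, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }

        // Assumed tolerance for illustration; the real threshold lives in minikube's fix logic.
        const tolerance = 1 * time.Second
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", delta, tolerance)
        }
    }
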
	I0913 18:42:11.714960   22792 start.go:83] releasing machines lock for "ha-617764", held for 25.303117412s
	I0913 18:42:11.714991   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.715245   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:11.717660   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.718028   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.718057   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.718183   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.718784   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.718983   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.719074   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:42:11.719163   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.719206   22792 ssh_runner.go:195] Run: cat /version.json
	I0913 18:42:11.719227   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.721920   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.721954   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722230   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.722254   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722282   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.722301   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722411   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.722537   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.722601   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.722702   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.722754   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.722842   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.722890   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.722960   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.820073   22792 ssh_runner.go:195] Run: systemctl --version
	I0913 18:42:11.825894   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:42:11.982385   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:42:11.988782   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:42:11.988864   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:42:12.004565   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:42:12.004593   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:42:12.004661   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:42:12.019979   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:42:12.032588   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:42:12.032636   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:42:12.045995   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:42:12.058796   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:42:12.171682   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:42:12.328312   22792 docker.go:233] disabling docker service ...
	I0913 18:42:12.328387   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:42:12.342929   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:42:12.355609   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:42:12.461539   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:42:12.583650   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:42:12.597599   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:42:12.616301   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:42:12.616369   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.627045   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:42:12.627114   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.637884   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.648895   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.659405   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:42:12.670256   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.680556   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.697451   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.708124   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:42:12.717399   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:42:12.717467   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:42:12.730124   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:42:12.740070   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:12.860993   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
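
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager) and then restarts CRI-O. A rough Go sketch of that sed-and-restart sequence via os/exec; it assumes passwordless sudo and is only a simplified stand-in for minikube's ssh_runner calls:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell command with sudo, mirroring the ssh_runner "Run:" lines above.
    func run(cmd string) error {
        out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %v\n%s", cmd, err, out)
        }
        return nil
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        steps := []string{
            // Point CRI-O at the pause image and cgroupfs driver, as in the log.
            fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
            fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
            "systemctl daemon-reload",
            "systemctl restart crio",
        }
        for _, s := range steps {
            if err := run(s); err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
        fmt.Println("cri-o reconfigured and restarted")
    }
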
	I0913 18:42:12.952434   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:42:12.952520   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:42:12.957244   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:42:12.957290   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:42:12.960871   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:42:13.003023   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:42:13.003108   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:13.030965   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:13.061413   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:42:13.062704   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:13.065064   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:13.065406   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:13.065433   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:13.065636   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:42:13.069674   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
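
The one-liner above makes host.minikube.internal resolve to the gateway by filtering any old entry out of /etc/hosts and appending a fresh one. The same idempotent filter-then-append can be expressed without the shell; the sketch below operates on a scratch copy of the file rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing line for the given hostname and appends a fresh entry.
    func upsertHost(hostsFile, ip, hostname string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasSuffix(trimmed, "\t"+hostname) || strings.HasSuffix(trimmed, " "+hostname) {
                continue // old entry, replaced below
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Use a scratch copy rather than the real /etc/hosts for this demonstration.
        if err := upsertHost("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
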
	I0913 18:42:13.082398   22792 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:42:13.082510   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:42:13.082551   22792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:42:13.114270   22792 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 18:42:13.114344   22792 ssh_runner.go:195] Run: which lz4
	I0913 18:42:13.118116   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0913 18:42:13.118209   22792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:42:13.122135   22792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:42:13.122172   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 18:42:14.418387   22792 crio.go:462] duration metric: took 1.300206452s to copy over tarball
	I0913 18:42:14.418465   22792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:42:16.405722   22792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.987230034s)
	I0913 18:42:16.405745   22792 crio.go:469] duration metric: took 1.987328817s to extract the tarball
	I0913 18:42:16.405752   22792 ssh_runner.go:146] rm: /preloaded.tar.lz4
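
The preload step above stats /preloaded.tar.lz4 on the guest, copies the ~388 MB cached tarball over when it is missing, extracts it into /var, and deletes it. A hedged sketch of that check-copy-extract flow driven through the ssh/scp CLIs; the target address, key path and tarball name below are placeholders taken loosely from the log, not minikube's actual transfer code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    const (
        sshTarget = "docker@192.168.39.145"                        // placeholder guest address
        key       = "/path/to/.minikube/machines/ha-617764/id_rsa" // placeholder key path
        tarball   = "preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
    )

    func ssh(cmd string) error {
        return exec.Command("ssh", "-i", key, sshTarget, cmd).Run()
    }

    func main() {
        // 1. Does the preload already exist on the guest?
        if err := ssh("stat /preloaded.tar.lz4"); err == nil {
            fmt.Println("preload already present, skipping copy")
            return
        }
        // 2. Copy the local cache over.
        if err := exec.Command("scp", "-i", key, tarball, sshTarget+":/preloaded.tar.lz4").Run(); err != nil {
            fmt.Println("scp failed:", err)
            return
        }
        // 3. Extract into /var and clean up, as the log does.
        for _, c := range []string{
            "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
            "sudo rm /preloaded.tar.lz4",
        } {
            if err := ssh(c); err != nil {
                fmt.Println("remote step failed:", err)
                return
            }
        }
        fmt.Println("preload extracted")
    }
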
	I0913 18:42:16.443623   22792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:42:16.489290   22792 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:42:16.489312   22792 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:42:16.489319   22792 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 18:42:16.489446   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:42:16.489517   22792 ssh_runner.go:195] Run: crio config
	I0913 18:42:16.532922   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:42:16.532944   22792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 18:42:16.532955   22792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:42:16.532974   22792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:42:16.533087   22792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
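
minikube renders the kubeadm YAML above from the option set dumped at kubeadm.go:181. The toy text/template below shows how a few of those options (version, control-plane endpoint, networking) could flow into such a file; the template text is illustrative only and is not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    type networking struct {
        DNSDomain     string
        PodSubnet     string
        ServiceSubnet string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{ .Version }}
    controlPlaneEndpoint: {{ .Endpoint }}
    networking:
      dnsDomain: {{ .Net.DNSDomain }}
      podSubnet: "{{ .Net.PodSubnet }}"
      serviceSubnet: {{ .Net.ServiceSubnet }}
    `

    func main() {
        data := struct {
            Version  string
            Endpoint string
            Net      networking
        }{
            Version:  "v1.31.1",
            Endpoint: "control-plane.minikube.internal:8443",
            Net: networking{
                DNSDomain:     "cluster.local",
                PodSubnet:     "10.244.0.0/16",
                ServiceSubnet: "10.96.0.0/12",
            },
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
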
	
	I0913 18:42:16.533109   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:42:16.533150   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:42:16.549716   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:42:16.549818   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
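
Every tunable in the kube-vip static pod above (VIP address, port, interface, load-balancing) reaches the container as an environment variable. A small sketch of how such settings could be mapped onto that env list; the struct and field names are made up for illustration and are not kube-vip's or minikube's own types:

    package main

    import "fmt"

    // vipSettings captures the handful of knobs that end up as env vars in the
    // kube-vip static pod above. Field values mirror the manifest in the log.
    type vipSettings struct {
        Address   string // the control-plane VIP (APIServerHAVIP)
        Port      int
        Interface string
        LBEnable  bool // auto-enabled control-plane load-balancing
    }

    type kv struct{ name, value string }

    func (v vipSettings) env() []kv {
        return []kv{
            {"vip_arp", "true"},
            {"port", fmt.Sprint(v.Port)},
            {"vip_interface", v.Interface},
            {"cp_enable", "true"},
            {"vip_leaderelection", "true"},
            {"address", v.Address},
            {"lb_enable", fmt.Sprint(v.LBEnable)},
            {"lb_port", fmt.Sprint(v.Port)},
        }
    }

    func main() {
        s := vipSettings{Address: "192.168.39.254", Port: 8443, Interface: "eth0", LBEnable: true}
        for _, e := range s.env() {
            fmt.Printf("    - name: %s\n      value: %q\n", e.name, e.value)
        }
    }
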
	I0913 18:42:16.549866   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:16.559900   22792 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:42:16.559962   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 18:42:16.569382   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 18:42:16.585673   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:42:16.602255   22792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 18:42:16.618723   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0913 18:42:16.634794   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:42:16.638626   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:42:16.651362   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:16.762368   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:42:16.779430   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 18:42:16.779452   22792 certs.go:194] generating shared ca certs ...
	I0913 18:42:16.779510   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.779672   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:42:16.779714   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:42:16.779721   22792 certs.go:256] generating profile certs ...
	I0913 18:42:16.779771   22792 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:42:16.779792   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt with IP's: []
	I0913 18:42:16.941388   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt ...
	I0913 18:42:16.941415   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt: {Name:mk44eed791f2583040b622110d984321628f6223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.941581   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key ...
	I0913 18:42:16.941593   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key: {Name:mk1915c48dc6fc804dedf32c0a46e920bb821a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.941665   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887
	I0913 18:42:16.941679   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.254]
	I0913 18:42:17.210285   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 ...
	I0913 18:42:17.210315   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887: {Name:mk8a652a777a3d4d8cb2161b0f1935680536b79d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.210463   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887 ...
	I0913 18:42:17.210475   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887: {Name:mkdb5fbb1ec247d9ce8891014dfa79d01eef24fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.210543   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:42:17.210633   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 18:42:17.210686   22792 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:42:17.210700   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt with IP's: []
	I0913 18:42:17.337363   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt ...
	I0913 18:42:17.337393   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt: {Name:mkd514a028f059d8de360447f0fae602d4a32c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.337549   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key ...
	I0913 18:42:17.337560   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key: {Name:mk3daf966e864f78edc7ad53314f95accf71a54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.337625   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:42:17.337642   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:42:17.337652   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:42:17.337662   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:42:17.337673   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:42:17.337683   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:42:17.337695   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:42:17.337704   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:42:17.337755   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:42:17.337788   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:42:17.337796   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:42:17.337829   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:42:17.337856   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:42:17.337877   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:42:17.337916   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:17.337940   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.337959   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.337972   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.338554   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:42:17.364338   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:42:17.387197   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:42:17.410443   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:42:17.433814   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 18:42:17.456479   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:42:17.479080   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:42:17.501736   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:42:17.524376   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:42:17.549608   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:42:17.572433   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:42:17.597199   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:42:17.613267   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:42:17.619055   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:42:17.629948   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.634415   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.634473   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.640077   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:42:17.650772   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:42:17.661921   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.666610   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.666668   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.672350   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:42:17.683307   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:42:17.694195   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.698826   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.698883   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.704664   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
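
Each CA above is made visible to OpenSSL consumers by symlinking /etc/ssl/certs/<subject-hash>.0 to the certificate, using the hash printed by `openssl x509 -hash -noout`. A sketch of that hash-and-link step, shelling out to openssl the same way the log does (paths are placeholders, and writing into /etc/ssl/certs requires root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCA symlinks certPath into certsDir under its OpenSSL subject-hash name,
    // e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem in the log above.
    func linkCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        // Placeholder paths; the real run links files under /usr/share/ca-certificates.
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
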
	I0913 18:42:17.715573   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:42:17.719695   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:42:17.719743   22792 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:42:17.719833   22792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:42:17.719901   22792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:42:17.756895   22792 cri.go:89] found id: ""
	I0913 18:42:17.756978   22792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:42:17.767125   22792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:42:17.776625   22792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:42:17.786162   22792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:42:17.786188   22792 kubeadm.go:157] found existing configuration files:
	
	I0913 18:42:17.786239   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:42:17.795290   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:42:17.795350   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:42:17.804626   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:42:17.813683   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:42:17.813741   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:42:17.823450   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:42:17.832901   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:42:17.832962   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:42:17.842504   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:42:17.851577   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:42:17.851639   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
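The cleanup above follows a simple pattern: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. A minimal Go sketch of that check (run locally here for illustration; minikube executes the same commands through its ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// Keep each kubeconfig only if it already points at the expected
// control-plane endpoint; otherwise treat it as stale and remove it.
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s is stale or absent, removing\n", f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}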
	I0913 18:42:17.861524   22792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:42:17.958735   22792 kubeadm.go:310] W0913 18:42:17.943166     843 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:42:17.961057   22792 kubeadm.go:310] W0913 18:42:17.945581     843 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:42:18.060353   22792 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:42:29.172501   22792 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:42:29.172573   22792 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:42:29.172684   22792 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:42:29.172832   22792 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:42:29.172965   22792 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:42:29.173065   22792 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:42:29.174820   22792 out.go:235]   - Generating certificates and keys ...
	I0913 18:42:29.174903   22792 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:42:29.174960   22792 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:42:29.175019   22792 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:42:29.175086   22792 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:42:29.175159   22792 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:42:29.175230   22792 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:42:29.175305   22792 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:42:29.175507   22792 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-617764 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0913 18:42:29.175590   22792 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:42:29.175753   22792 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-617764 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0913 18:42:29.175840   22792 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:42:29.175930   22792 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:42:29.175992   22792 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:42:29.176080   22792 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:42:29.176162   22792 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:42:29.176240   22792 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:42:29.176320   22792 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:42:29.176409   22792 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:42:29.176484   22792 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:42:29.176570   22792 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:42:29.176629   22792 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:42:29.178531   22792 out.go:235]   - Booting up control plane ...
	I0913 18:42:29.178618   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:42:29.178715   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:42:29.178797   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:42:29.178891   22792 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:42:29.178971   22792 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:42:29.179009   22792 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:42:29.179149   22792 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:42:29.179252   22792 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:42:29.179307   22792 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001926088s
	I0913 18:42:29.179401   22792 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:42:29.179459   22792 kubeadm.go:310] [api-check] The API server is healthy after 5.655401274s
	I0913 18:42:29.179582   22792 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:42:29.179756   22792 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:42:29.179836   22792 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:42:29.180032   22792 kubeadm.go:310] [mark-control-plane] Marking the node ha-617764 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:42:29.180085   22792 kubeadm.go:310] [bootstrap-token] Using token: wcshh7.vfnyb8uttcj6bcfg
	I0913 18:42:29.181519   22792 out.go:235]   - Configuring RBAC rules ...
	I0913 18:42:29.181620   22792 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:42:29.181691   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:42:29.181810   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:42:29.181964   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:42:29.182169   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:42:29.182276   22792 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:42:29.182380   22792 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:42:29.182420   22792 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:42:29.182464   22792 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:42:29.182470   22792 kubeadm.go:310] 
	I0913 18:42:29.182523   22792 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:42:29.182529   22792 kubeadm.go:310] 
	I0913 18:42:29.182597   22792 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:42:29.182603   22792 kubeadm.go:310] 
	I0913 18:42:29.182624   22792 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:42:29.182677   22792 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:42:29.182728   22792 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:42:29.182735   22792 kubeadm.go:310] 
	I0913 18:42:29.182778   22792 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:42:29.182784   22792 kubeadm.go:310] 
	I0913 18:42:29.182823   22792 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:42:29.182828   22792 kubeadm.go:310] 
	I0913 18:42:29.182875   22792 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:42:29.182938   22792 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:42:29.183002   22792 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:42:29.183009   22792 kubeadm.go:310] 
	I0913 18:42:29.183083   22792 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:42:29.183152   22792 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:42:29.183158   22792 kubeadm.go:310] 
	I0913 18:42:29.183226   22792 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wcshh7.vfnyb8uttcj6bcfg \
	I0913 18:42:29.183315   22792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 18:42:29.183349   22792 kubeadm.go:310] 	--control-plane 
	I0913 18:42:29.183372   22792 kubeadm.go:310] 
	I0913 18:42:29.183490   22792 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:42:29.183497   22792 kubeadm.go:310] 
	I0913 18:42:29.183600   22792 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wcshh7.vfnyb8uttcj6bcfg \
	I0913 18:42:29.183695   22792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 18:42:29.183705   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:42:29.183712   22792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 18:42:29.185184   22792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0913 18:42:29.186427   22792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0913 18:42:29.193124   22792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0913 18:42:29.193152   22792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0913 18:42:29.211367   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0913 18:42:29.620466   22792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:42:29.620575   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:29.620716   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764 minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=true
	I0913 18:42:29.842265   22792 ops.go:34] apiserver oom_adj: -16
	I0913 18:42:29.842480   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:30.342706   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:30.842533   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:31.343422   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:31.842644   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.342550   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.842702   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.945537   22792 kubeadm.go:1113] duration metric: took 3.325029347s to wait for elevateKubeSystemPrivileges
	I0913 18:42:32.945573   22792 kubeadm.go:394] duration metric: took 15.225833532s to StartCluster
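The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists before it proceeds. A minimal sketch of such a wait loop, reusing the binary and kubeconfig paths from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until the "default" ServiceAccount exists, mirroring the wait loop in
// the log above. Paths are the ones shown in the log; adjust for other setups.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}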
	I0913 18:42:32.945595   22792 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:32.945688   22792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:42:32.946598   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:32.946842   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:42:32.946852   22792 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:42:32.946877   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:42:32.946891   22792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 18:42:32.946971   22792 addons.go:69] Setting storage-provisioner=true in profile "ha-617764"
	I0913 18:42:32.946990   22792 addons.go:234] Setting addon storage-provisioner=true in "ha-617764"
	I0913 18:42:32.947019   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:32.947062   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:32.946989   22792 addons.go:69] Setting default-storageclass=true in profile "ha-617764"
	I0913 18:42:32.947091   22792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-617764"
	I0913 18:42:32.947394   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.947404   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.947425   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.947513   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.963607   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0913 18:42:32.963910   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0913 18:42:32.964165   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.964260   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.964866   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.964886   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.964931   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.964951   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.965288   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.965289   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.965504   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:32.965895   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.965935   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.967835   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:42:32.968146   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 18:42:32.968608   22792 cert_rotation.go:140] Starting client certificate rotation controller
	I0913 18:42:32.968754   22792 addons.go:234] Setting addon default-storageclass=true in "ha-617764"
	I0913 18:42:32.968780   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:32.969002   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.969032   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.981249   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0913 18:42:32.981684   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.982160   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.982185   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.982525   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.982698   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:32.983665   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0913 18:42:32.984052   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.984518   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.984532   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.984586   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:32.984938   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.985343   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.985373   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.986418   22792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:42:32.987796   22792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:42:32.987811   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:42:32.987825   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:32.990920   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:32.991398   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:32.991430   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:32.991626   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:32.991806   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:32.991948   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:32.992069   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:33.000960   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0913 18:42:33.001377   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:33.001866   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:33.001887   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:33.002180   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:33.002376   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:33.003800   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:33.003996   22792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:42:33.004014   22792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:42:33.004029   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:33.006450   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:33.006853   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:33.006869   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:33.007034   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:33.007221   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:33.007366   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:33.007510   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:33.090159   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:42:33.189726   22792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:42:33.218377   22792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:42:33.516232   22792 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
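The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway before queries fall through to /etc/resolv.conf. A simplified Go sketch of just the hosts-block insertion (the Corefile literal is a stand-in, and the log's pipeline additionally enables query logging):

package main

import (
	"fmt"
	"strings"
)

// Insert a hosts{} block for host.minikube.internal immediately before the
// "forward . /etc/resolv.conf" directive, as the logged sed pipeline does.
func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
	hostsBlock := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf`
	patched := strings.Replace(corefile, "        forward . /etc/resolv.conf", hostsBlock, 1)
	fmt.Println(patched)
}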
	I0913 18:42:33.790186   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790220   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790255   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790274   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790546   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790561   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790571   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790579   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790608   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790621   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790631   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790638   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790810   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790813   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790833   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790852   22792 main.go:141] libmachine: (ha-617764) DBG | Closing plugin on server side
	I0913 18:42:33.790821   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790904   22792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 18:42:33.790927   22792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 18:42:33.791043   22792 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0913 18:42:33.791054   22792 round_trippers.go:469] Request Headers:
	I0913 18:42:33.791064   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:42:33.791076   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:42:33.808225   22792 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0913 18:42:33.808988   22792 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0913 18:42:33.809008   22792 round_trippers.go:469] Request Headers:
	I0913 18:42:33.809019   22792 round_trippers.go:473]     Content-Type: application/json
	I0913 18:42:33.809024   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:42:33.809028   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:42:33.813534   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:42:33.813685   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.813703   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.813977   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.813997   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.816633   22792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0913 18:42:33.817831   22792 addons.go:510] duration metric: took 870.940329ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0913 18:42:33.817878   22792 start.go:246] waiting for cluster config update ...
	I0913 18:42:33.817894   22792 start.go:255] writing updated cluster config ...
	I0913 18:42:33.820194   22792 out.go:201] 
	I0913 18:42:33.821789   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:33.821919   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:33.823747   22792 out.go:177] * Starting "ha-617764-m02" control-plane node in "ha-617764" cluster
	I0913 18:42:33.825412   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:42:33.825435   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:42:33.825541   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:42:33.825552   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:42:33.825621   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:33.825926   22792 start.go:360] acquireMachinesLock for ha-617764-m02: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:42:33.825968   22792 start.go:364] duration metric: took 23.623µs to acquireMachinesLock for "ha-617764-m02"
	I0913 18:42:33.825984   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:42:33.826053   22792 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0913 18:42:33.827760   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:42:33.827853   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:33.827885   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:33.842456   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I0913 18:42:33.842932   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:33.843363   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:33.843385   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:33.843677   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:33.843837   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:33.844018   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:33.844168   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:42:33.844198   22792 client.go:168] LocalClient.Create starting
	I0913 18:42:33.844239   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:42:33.844270   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:42:33.844285   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:42:33.844331   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:42:33.844352   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:42:33.844362   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:42:33.844379   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:42:33.844387   22792 main.go:141] libmachine: (ha-617764-m02) Calling .PreCreateCheck
	I0913 18:42:33.844535   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:33.844909   22792 main.go:141] libmachine: Creating machine...
	I0913 18:42:33.844921   22792 main.go:141] libmachine: (ha-617764-m02) Calling .Create
	I0913 18:42:33.845093   22792 main.go:141] libmachine: (ha-617764-m02) Creating KVM machine...
	I0913 18:42:33.846503   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found existing default KVM network
	I0913 18:42:33.846596   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found existing private KVM network mk-ha-617764
	I0913 18:42:33.846724   22792 main.go:141] libmachine: (ha-617764-m02) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 ...
	I0913 18:42:33.846769   22792 main.go:141] libmachine: (ha-617764-m02) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:42:33.846832   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:33.846727   23143 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:42:33.846916   22792 main.go:141] libmachine: (ha-617764-m02) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:42:34.098734   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.098637   23143 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa...
	I0913 18:42:34.182300   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.182200   23143 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/ha-617764-m02.rawdisk...
	I0913 18:42:34.182336   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Writing magic tar header
	I0913 18:42:34.182360   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Writing SSH key tar header
	I0913 18:42:34.182375   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.182308   23143 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 ...
	I0913 18:42:34.182445   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02
	I0913 18:42:34.182476   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:42:34.182497   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 (perms=drwx------)
	I0913 18:42:34.182512   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:42:34.182525   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:42:34.182535   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:42:34.182545   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:42:34.182554   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home
	I0913 18:42:34.182565   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Skipping /home - not owner
	I0913 18:42:34.182576   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:42:34.182590   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:42:34.182605   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:42:34.182625   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:42:34.182637   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:42:34.182650   22792 main.go:141] libmachine: (ha-617764-m02) Creating domain...
	I0913 18:42:34.183657   22792 main.go:141] libmachine: (ha-617764-m02) define libvirt domain using xml: 
	I0913 18:42:34.183679   22792 main.go:141] libmachine: (ha-617764-m02) <domain type='kvm'>
	I0913 18:42:34.183690   22792 main.go:141] libmachine: (ha-617764-m02)   <name>ha-617764-m02</name>
	I0913 18:42:34.183700   22792 main.go:141] libmachine: (ha-617764-m02)   <memory unit='MiB'>2200</memory>
	I0913 18:42:34.183709   22792 main.go:141] libmachine: (ha-617764-m02)   <vcpu>2</vcpu>
	I0913 18:42:34.183718   22792 main.go:141] libmachine: (ha-617764-m02)   <features>
	I0913 18:42:34.183726   22792 main.go:141] libmachine: (ha-617764-m02)     <acpi/>
	I0913 18:42:34.183733   22792 main.go:141] libmachine: (ha-617764-m02)     <apic/>
	I0913 18:42:34.183742   22792 main.go:141] libmachine: (ha-617764-m02)     <pae/>
	I0913 18:42:34.183752   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.183760   22792 main.go:141] libmachine: (ha-617764-m02)   </features>
	I0913 18:42:34.183771   22792 main.go:141] libmachine: (ha-617764-m02)   <cpu mode='host-passthrough'>
	I0913 18:42:34.183778   22792 main.go:141] libmachine: (ha-617764-m02)   
	I0913 18:42:34.183791   22792 main.go:141] libmachine: (ha-617764-m02)   </cpu>
	I0913 18:42:34.183820   22792 main.go:141] libmachine: (ha-617764-m02)   <os>
	I0913 18:42:34.183838   22792 main.go:141] libmachine: (ha-617764-m02)     <type>hvm</type>
	I0913 18:42:34.183852   22792 main.go:141] libmachine: (ha-617764-m02)     <boot dev='cdrom'/>
	I0913 18:42:34.183862   22792 main.go:141] libmachine: (ha-617764-m02)     <boot dev='hd'/>
	I0913 18:42:34.183875   22792 main.go:141] libmachine: (ha-617764-m02)     <bootmenu enable='no'/>
	I0913 18:42:34.183885   22792 main.go:141] libmachine: (ha-617764-m02)   </os>
	I0913 18:42:34.183895   22792 main.go:141] libmachine: (ha-617764-m02)   <devices>
	I0913 18:42:34.183905   22792 main.go:141] libmachine: (ha-617764-m02)     <disk type='file' device='cdrom'>
	I0913 18:42:34.183923   22792 main.go:141] libmachine: (ha-617764-m02)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/boot2docker.iso'/>
	I0913 18:42:34.183934   22792 main.go:141] libmachine: (ha-617764-m02)       <target dev='hdc' bus='scsi'/>
	I0913 18:42:34.183946   22792 main.go:141] libmachine: (ha-617764-m02)       <readonly/>
	I0913 18:42:34.183956   22792 main.go:141] libmachine: (ha-617764-m02)     </disk>
	I0913 18:42:34.183967   22792 main.go:141] libmachine: (ha-617764-m02)     <disk type='file' device='disk'>
	I0913 18:42:34.183979   22792 main.go:141] libmachine: (ha-617764-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:42:34.183995   22792 main.go:141] libmachine: (ha-617764-m02)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/ha-617764-m02.rawdisk'/>
	I0913 18:42:34.184005   22792 main.go:141] libmachine: (ha-617764-m02)       <target dev='hda' bus='virtio'/>
	I0913 18:42:34.184014   22792 main.go:141] libmachine: (ha-617764-m02)     </disk>
	I0913 18:42:34.184024   22792 main.go:141] libmachine: (ha-617764-m02)     <interface type='network'>
	I0913 18:42:34.184047   22792 main.go:141] libmachine: (ha-617764-m02)       <source network='mk-ha-617764'/>
	I0913 18:42:34.184072   22792 main.go:141] libmachine: (ha-617764-m02)       <model type='virtio'/>
	I0913 18:42:34.184082   22792 main.go:141] libmachine: (ha-617764-m02)     </interface>
	I0913 18:42:34.184089   22792 main.go:141] libmachine: (ha-617764-m02)     <interface type='network'>
	I0913 18:42:34.184098   22792 main.go:141] libmachine: (ha-617764-m02)       <source network='default'/>
	I0913 18:42:34.184105   22792 main.go:141] libmachine: (ha-617764-m02)       <model type='virtio'/>
	I0913 18:42:34.184112   22792 main.go:141] libmachine: (ha-617764-m02)     </interface>
	I0913 18:42:34.184121   22792 main.go:141] libmachine: (ha-617764-m02)     <serial type='pty'>
	I0913 18:42:34.184133   22792 main.go:141] libmachine: (ha-617764-m02)       <target port='0'/>
	I0913 18:42:34.184139   22792 main.go:141] libmachine: (ha-617764-m02)     </serial>
	I0913 18:42:34.184147   22792 main.go:141] libmachine: (ha-617764-m02)     <console type='pty'>
	I0913 18:42:34.184155   22792 main.go:141] libmachine: (ha-617764-m02)       <target type='serial' port='0'/>
	I0913 18:42:34.184162   22792 main.go:141] libmachine: (ha-617764-m02)     </console>
	I0913 18:42:34.184172   22792 main.go:141] libmachine: (ha-617764-m02)     <rng model='virtio'>
	I0913 18:42:34.184181   22792 main.go:141] libmachine: (ha-617764-m02)       <backend model='random'>/dev/random</backend>
	I0913 18:42:34.184190   22792 main.go:141] libmachine: (ha-617764-m02)     </rng>
	I0913 18:42:34.184196   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.184205   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.184213   22792 main.go:141] libmachine: (ha-617764-m02)   </devices>
	I0913 18:42:34.184224   22792 main.go:141] libmachine: (ha-617764-m02) </domain>
	I0913 18:42:34.184234   22792 main.go:141] libmachine: (ha-617764-m02) 
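The XML printed above is handed to libvirt to define and boot the ha-617764-m02 domain. A minimal sketch of that step using the libvirt-go bindings; the exact calls the kvm2 driver makes are an assumption here, and the XML file path is a placeholder:

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

// Define and start a domain from an XML description, roughly what the kvm2
// driver does with the XML printed above. Requires the libvirt headers (cgo)
// and a reachable qemu:///system socket; the XML path is a placeholder.
func main() {
	xml, err := os.ReadFile("ha-617764-m02.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		log.Fatal(err)
	}
	fmt.Println("domain defined and started")
}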
	I0913 18:42:34.191005   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:bc:5e:d5 in network default
	I0913 18:42:34.191737   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:34.191757   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring networks are active...
	I0913 18:42:34.192718   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring network default is active
	I0913 18:42:34.193103   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring network mk-ha-617764 is active
	I0913 18:42:34.193588   22792 main.go:141] libmachine: (ha-617764-m02) Getting domain xml...
	I0913 18:42:34.194419   22792 main.go:141] libmachine: (ha-617764-m02) Creating domain...
	I0913 18:42:35.408107   22792 main.go:141] libmachine: (ha-617764-m02) Waiting to get IP...
	I0913 18:42:35.408973   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.409470   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.409493   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.409451   23143 retry.go:31] will retry after 264.373822ms: waiting for machine to come up
	I0913 18:42:35.676087   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.676476   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.676503   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.676421   23143 retry.go:31] will retry after 263.878522ms: waiting for machine to come up
	I0913 18:42:35.942022   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.942487   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.942515   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.942441   23143 retry.go:31] will retry after 338.022522ms: waiting for machine to come up
	I0913 18:42:36.282060   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:36.282605   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:36.282631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:36.282553   23143 retry.go:31] will retry after 536.406863ms: waiting for machine to come up
	I0913 18:42:36.820192   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:36.820631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:36.820655   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:36.820599   23143 retry.go:31] will retry after 505.176991ms: waiting for machine to come up
	I0913 18:42:37.327316   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:37.327776   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:37.327808   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:37.327731   23143 retry.go:31] will retry after 710.248346ms: waiting for machine to come up
	I0913 18:42:38.039518   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:38.039974   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:38.039999   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:38.039914   23143 retry.go:31] will retry after 1.093957656s: waiting for machine to come up
	I0913 18:42:39.135450   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:39.135831   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:39.135859   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:39.135778   23143 retry.go:31] will retry after 1.203417577s: waiting for machine to come up
	I0913 18:42:40.340982   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:40.341334   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:40.341362   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:40.341294   23143 retry.go:31] will retry after 1.236225531s: waiting for machine to come up
	I0913 18:42:41.579551   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:41.580029   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:41.580051   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:41.579969   23143 retry.go:31] will retry after 2.326969723s: waiting for machine to come up
	I0913 18:42:43.908257   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:43.908629   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:43.908654   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:43.908589   23143 retry.go:31] will retry after 2.078305319s: waiting for machine to come up
	I0913 18:42:45.988301   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:45.988776   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:45.988805   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:45.988726   23143 retry.go:31] will retry after 2.330094079s: waiting for machine to come up
	I0913 18:42:48.322144   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:48.322497   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:48.322511   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:48.322461   23143 retry.go:31] will retry after 3.235874809s: waiting for machine to come up
	I0913 18:42:51.562199   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:51.562650   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:51.562678   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:51.562590   23143 retry.go:31] will retry after 3.996843955s: waiting for machine to come up
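The "will retry after ..." lines come from a retry loop that waits for the new VM to obtain a DHCP lease, sleeping with a growing, jittered delay between attempts. A simplified sketch of that pattern; lookupIP is a hypothetical placeholder for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's DHCP
// leases for the machine's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// Wait for the machine's IP with a growing, jittered delay, similar in spirit
// to the retry backoff visible in the log above.
func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	fmt.Println("gave up waiting for machine to come up")
}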
	I0913 18:42:55.562043   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.562475   22792 main.go:141] libmachine: (ha-617764-m02) Found IP for machine: 192.168.39.203
	I0913 18:42:55.562497   22792 main.go:141] libmachine: (ha-617764-m02) Reserving static IP address...
	I0913 18:42:55.562514   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has current primary IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.562848   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find host DHCP lease matching {name: "ha-617764-m02", mac: "52:54:00:ab:42:52", ip: "192.168.39.203"} in network mk-ha-617764
	I0913 18:42:55.635170   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Getting to WaitForSSH function...
	I0913 18:42:55.635207   22792 main.go:141] libmachine: (ha-617764-m02) Reserved static IP address: 192.168.39.203
	I0913 18:42:55.635256   22792 main.go:141] libmachine: (ha-617764-m02) Waiting for SSH to be available...
	I0913 18:42:55.638187   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.638602   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.638630   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.638793   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using SSH client type: external
	I0913 18:42:55.638873   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa (-rw-------)
	I0913 18:42:55.639483   22792 main.go:141] libmachine: (ha-617764-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:42:55.640013   22792 main.go:141] libmachine: (ha-617764-m02) DBG | About to run SSH command:
	I0913 18:42:55.640037   22792 main.go:141] libmachine: (ha-617764-m02) DBG | exit 0
	I0913 18:42:55.762288   22792 main.go:141] libmachine: (ha-617764-m02) DBG | SSH cmd err, output: <nil>: 
	I0913 18:42:55.762565   22792 main.go:141] libmachine: (ha-617764-m02) KVM machine creation complete!
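[editor's note] The lines above show the machine bring-up loop: libmachine repeatedly looks for a DHCP lease matching the VM's MAC address, backs off a little longer on each miss (the retry.go "will retry after …" lines), and finally probes the guest with a bare `exit 0` over SSH before declaring the KVM machine created. A minimal Go sketch of that wait-with-backoff pattern, assuming a placeholder lookupLeaseIP helper (this is illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
// matching the VM's MAC address; it is a placeholder for this sketch.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until the lease shows up, growing the delay between
// attempts, and gives up at the deadline -- roughly the behaviour the
// retry.go lines in the log describe.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := time.Second
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little more each round
	}
	return "", fmt.Errorf("machine with MAC %s never obtained an IP", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:ab:42:52", 10*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}

Once the IP is known, the same idea is reused for SSH readiness: keep running `exit 0` until it succeeds, as the WaitForSSH lines above show.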
	I0913 18:42:55.762890   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:55.763481   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:55.763669   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:55.763800   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:42:55.763813   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:42:55.765272   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:42:55.765287   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:42:55.765293   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:42:55.765298   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.767597   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.767917   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.767935   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.768060   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.768273   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.768403   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.768509   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.768631   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.768890   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.768908   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:42:55.865390   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:55.865413   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:42:55.865424   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.868116   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.868486   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.868512   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.868653   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.868837   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.868991   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.869119   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.869326   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.869599   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.869613   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:42:55.966894   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:42:55.966998   22792 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:42:55.967011   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:42:55.967022   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:55.967311   22792 buildroot.go:166] provisioning hostname "ha-617764-m02"
	I0913 18:42:55.967338   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:55.967522   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.970301   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.970631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.970660   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.970825   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.971018   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.971163   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.971301   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.971496   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.971707   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.971725   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764-m02 && echo "ha-617764-m02" | sudo tee /etc/hostname
	I0913 18:42:56.086576   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764-m02
	
	I0913 18:42:56.086607   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.089443   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.089742   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.089766   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.089955   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.090166   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.090435   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.090571   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.090760   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.090911   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.090926   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:42:56.195182   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:56.195220   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:42:56.195241   22792 buildroot.go:174] setting up certificates
	I0913 18:42:56.195252   22792 provision.go:84] configureAuth start
	I0913 18:42:56.195262   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:56.195523   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.197899   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.198225   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.198248   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.198365   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.200705   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.201030   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.201057   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.201201   22792 provision.go:143] copyHostCerts
	I0913 18:42:56.201233   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:56.201274   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:42:56.201286   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:56.201366   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:42:56.201456   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:56.201478   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:42:56.201486   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:56.201516   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:42:56.201567   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:56.201589   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:42:56.201597   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:56.201623   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:42:56.201680   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764-m02 san=[127.0.0.1 192.168.39.203 ha-617764-m02 localhost minikube]
	I0913 18:42:56.304838   22792 provision.go:177] copyRemoteCerts
	I0913 18:42:56.304894   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:42:56.304915   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.307334   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.307653   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.307685   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.307806   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.307976   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.308108   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.308232   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.388206   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:42:56.388295   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0913 18:42:56.412902   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:42:56.412975   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:42:56.437081   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:42:56.437162   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:42:56.461095   22792 provision.go:87] duration metric: took 265.820588ms to configureAuth
	I0913 18:42:56.461120   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:42:56.461323   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:56.461405   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.464186   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.464537   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.464571   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.464774   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.464944   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.465101   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.465223   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.465371   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.465559   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.465575   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:42:56.681537   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:42:56.681567   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:42:56.681579   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetURL
	I0913 18:42:56.682877   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using libvirt version 6000000
	I0913 18:42:56.684960   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.685263   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.685292   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.685455   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:42:56.685473   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:42:56.685479   22792 client.go:171] duration metric: took 22.841271502s to LocalClient.Create
	I0913 18:42:56.685504   22792 start.go:167] duration metric: took 22.841337164s to libmachine.API.Create "ha-617764"
	I0913 18:42:56.685514   22792 start.go:293] postStartSetup for "ha-617764-m02" (driver="kvm2")
	I0913 18:42:56.685530   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:42:56.685549   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.685743   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:42:56.685764   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.687558   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.687865   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.687891   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.688053   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.688205   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.688342   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.688451   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.767885   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:42:56.772109   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:42:56.772127   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:42:56.772191   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:42:56.772259   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:42:56.772268   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:42:56.772342   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:42:56.781943   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:56.808581   22792 start.go:296] duration metric: took 123.052756ms for postStartSetup
	I0913 18:42:56.808619   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:56.809145   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.811531   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.811840   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.811859   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.812097   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:56.812259   22792 start.go:128] duration metric: took 22.986195771s to createHost
	I0913 18:42:56.812278   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.814271   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.814590   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.814616   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.814735   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.814900   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.815055   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.815181   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.815329   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.815477   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.815485   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:42:56.910973   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726252976.878556016
	
	I0913 18:42:56.910995   22792 fix.go:216] guest clock: 1726252976.878556016
	I0913 18:42:56.911001   22792 fix.go:229] Guest: 2024-09-13 18:42:56.878556016 +0000 UTC Remote: 2024-09-13 18:42:56.812269104 +0000 UTC m=+70.503179379 (delta=66.286912ms)
	I0913 18:42:56.911016   22792 fix.go:200] guest clock delta is within tolerance: 66.286912ms
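[editor's note] The fix.go lines just above run `date +%s.%N` on the guest, compare it with the host-side timestamp, and only accept the machine when the delta stays inside a tolerance (here 66ms). A hedged Go sketch of that comparison, reusing the timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726252976.878556016") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1726252976, 812269104) // host-side "Remote" timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for the sketch only
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}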
	I0913 18:42:56.911021   22792 start.go:83] releasing machines lock for "ha-617764-m02", held for 23.085044062s
	I0913 18:42:56.911037   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.911342   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.913641   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.914008   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.914034   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.916176   22792 out.go:177] * Found network options:
	I0913 18:42:56.917389   22792 out.go:177]   - NO_PROXY=192.168.39.145
	W0913 18:42:56.918480   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:42:56.918510   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.918961   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.919119   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.919195   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:42:56.919235   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	W0913 18:42:56.919318   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:42:56.919377   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:42:56.919395   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.922064   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922354   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922410   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.922440   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922589   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.922762   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.922781   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.922796   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922906   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.922935   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.923116   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.923130   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.923273   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.923389   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:57.147627   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:42:57.154515   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:42:57.154583   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:42:57.171030   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:42:57.171050   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:42:57.171111   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:42:57.187446   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:42:57.200316   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:42:57.200359   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:42:57.212970   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:42:57.225988   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:42:57.344734   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:42:57.484508   22792 docker.go:233] disabling docker service ...
	I0913 18:42:57.484569   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:42:57.499332   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:42:57.512148   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:42:57.656863   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:42:57.779451   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:42:57.793246   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:42:57.811312   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:42:57.811380   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.822030   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:42:57.822082   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.832599   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.843228   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.854115   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:42:57.864918   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.876273   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.893313   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.904216   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:42:57.914207   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:42:57.914268   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:42:57.928195   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:42:57.938419   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:58.064351   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:42:58.165182   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:42:58.165248   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:42:58.170298   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:42:58.170339   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:42:58.174086   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:42:58.211997   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
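[editor's note] After CRI-O is reconfigured (pause image, cgroupfs, conmon cgroup) and restarted, the log waits up to 60s for /var/run/crio/crio.sock to appear and then shells out to crictl and `crio --version`. A stdlib-only Go sketch of that kind of socket wait, assuming a simple stat-and-sleep loop rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the CRI socket path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl / `crio --version` checks can follow
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRI socket is ready")
}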
	I0913 18:42:58.212072   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:58.239488   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:58.283822   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:42:58.285548   22792 out.go:177]   - env NO_PROXY=192.168.39.145
	I0913 18:42:58.286654   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:58.289221   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:58.289622   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:58.289650   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:58.289857   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:42:58.294318   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:42:58.306758   22792 mustload.go:65] Loading cluster: ha-617764
	I0913 18:42:58.306968   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:58.307259   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:58.307299   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:58.322070   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0913 18:42:58.322504   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:58.323022   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:58.323053   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:58.323361   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:58.323580   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:58.325023   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:58.325289   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:58.325319   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:58.339300   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0913 18:42:58.339772   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:58.340260   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:58.340278   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:58.340575   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:58.340724   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:58.340859   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.203
	I0913 18:42:58.340870   22792 certs.go:194] generating shared ca certs ...
	I0913 18:42:58.340882   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.340990   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:42:58.341027   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:42:58.341036   22792 certs.go:256] generating profile certs ...
	I0913 18:42:58.341109   22792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:42:58.341133   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2
	I0913 18:42:58.341148   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 18:42:58.505948   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 ...
	I0913 18:42:58.505974   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2: {Name:mk1f0f163f6880fd564fdf3cf71c4cf20e0ab1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.506144   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2 ...
	I0913 18:42:58.506157   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2: {Name:mkb45e7c95cfc51b46c801a3c439fa0dbd0be17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.506229   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:42:58.506354   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
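[editor's note] The apiserver certificate generated above carries IP SANs for the service IP (10.96.0.1), the loopback addresses, both control-plane node IPs and the kube-vip VIP 192.168.39.254, so the API server is reachable through any of them. A small self-signed Go sketch of issuing a certificate with that SAN list (self-signed here for brevity; the real cert is signed by the cluster CA, and the subject name is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed CN for the sketch
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log: service IP, loopbacks, both node IPs and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.145"), net.ParseIP("192.168.39.203"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}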
	I0913 18:42:58.506480   22792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:42:58.506494   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:42:58.506507   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:42:58.506521   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:42:58.506533   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:42:58.506544   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:42:58.506557   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:42:58.506568   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:42:58.506580   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:42:58.506623   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:42:58.506650   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:42:58.506659   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:42:58.506682   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:42:58.506702   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:42:58.506722   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:42:58.506756   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:58.506782   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:58.506795   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:42:58.506807   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:42:58.506835   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:58.509789   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:58.510175   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:58.510204   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:58.510371   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:58.510571   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:58.510733   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:58.510861   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:58.586366   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 18:42:58.591311   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 18:42:58.602028   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 18:42:58.606047   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 18:42:58.615371   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 18:42:58.619130   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 18:42:58.628861   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 18:42:58.633263   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 18:42:58.643569   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 18:42:58.647816   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 18:42:58.658335   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 18:42:58.662734   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 18:42:58.672848   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:42:58.699188   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:42:58.724599   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:42:58.749275   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:42:58.773279   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 18:42:58.796703   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:42:58.820178   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:42:58.844900   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:42:58.868932   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:42:58.893473   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:42:58.917540   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:42:58.940368   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 18:42:58.956895   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 18:42:58.972923   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 18:42:58.989583   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 18:42:59.008802   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 18:42:59.026107   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 18:42:59.042329   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 18:42:59.058225   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:42:59.063866   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:42:59.074196   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.078791   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.078835   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.084460   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:42:59.094525   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:42:59.104776   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.109074   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.109126   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.114673   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:42:59.125259   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:42:59.135745   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.140613   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.140695   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.146658   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:42:59.157420   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:42:59.161755   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:42:59.161816   22792 kubeadm.go:934] updating node {m02 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0913 18:42:59.161900   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:42:59.161922   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:42:59.161952   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:42:59.176862   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:42:59.176957   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0913 18:42:59.177009   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:59.187364   22792 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 18:42:59.187422   22792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:59.197410   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 18:42:59.197436   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:42:59.197495   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:42:59.197522   22792 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0913 18:42:59.197496   22792 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0913 18:42:59.202183   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 18:42:59.202209   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 18:43:02.732054   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:43:02.732128   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:43:02.737270   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 18:43:02.737313   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 18:43:02.958947   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:43:02.994648   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:43:02.994758   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:43:03.007070   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 18:43:03.007115   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
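
The three binaries (kubectl, kubeadm, kubelet) are fetched from dl.k8s.io with a "?checksum=file:<url>.sha256" hint and then copied into /var/lib/minikube/binaries/v1.31.1 over SSH. A standard-library-only sketch of the same download-and-verify pattern (URL and output path are illustrative; this is not minikube's own code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	got, err := fetch(url, "kubectl")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(url + ".sha256") // published digest; the first field is the hex sum
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if strings.Fields(string(want))[0] != got {
		panic(fmt.Sprintf("checksum mismatch for %s", url))
	}
	fmt.Println("kubectl verified:", got)
}
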
	I0913 18:43:03.373882   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 18:43:03.384047   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 18:43:03.402339   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:43:03.421245   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:43:03.439591   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:43:03.443820   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:43:03.456121   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:43:03.581257   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:43:03.600338   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:43:03.600751   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:43:03.600803   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:43:03.615242   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0913 18:43:03.615707   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:43:03.616197   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:43:03.616216   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:43:03.616509   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:43:03.616709   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:43:03.616831   22792 start.go:317] joinCluster: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:43:03.616931   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 18:43:03.616951   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:43:03.619819   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:43:03.620222   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:43:03.620246   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:43:03.620371   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:43:03.620523   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:43:03.620676   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:43:03.620807   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:43:03.766712   22792 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:03.766767   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7j7evy.9yflqt75sqaf2ecw --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443"
	I0913 18:43:26.782487   22792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7j7evy.9yflqt75sqaf2ecw --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443": (23.015680117s)
	I0913 18:43:26.782526   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 18:43:27.216131   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764-m02 minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=false
	I0913 18:43:27.365316   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-617764-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 18:43:27.479009   22792 start.go:319] duration metric: took 23.862174011s to joinCluster
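
After kubeadm join returns, the primary applies minikube's node labels and drops the node-role.kubernetes.io/control-plane:NoSchedule taint (the trailing "-" in the taint command above) so the new control-plane member can also run workloads. A rough client-go equivalent of those two kubectl calls, with the kubeconfig path and node name taken from this run but otherwise a sketch rather than minikube's implementation:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	node, err := client.CoreV1().Nodes().Get(ctx, "ha-617764-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Equivalent of: kubectl label --overwrite nodes ha-617764-m02 minikube.k8s.io/primary=false ...
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Equivalent of: kubectl taint nodes ha-617764-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != "node-role.kubernetes.io/control-plane" || t.Effect != corev1.TaintEffectNoSchedule {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept

	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
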
	I0913 18:43:27.479149   22792 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:27.479426   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:43:27.480584   22792 out.go:177] * Verifying Kubernetes components...
	I0913 18:43:27.481817   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:43:27.724286   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:43:27.745509   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:43:27.745863   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 18:43:27.745948   22792 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.145:8443
	I0913 18:43:27.746279   22792 node_ready.go:35] waiting up to 6m0s for node "ha-617764-m02" to be "Ready" ...
	I0913 18:43:27.746428   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:27.746442   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:27.746456   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:27.746462   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:27.755757   22792 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0913 18:43:28.247360   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:28.247380   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:28.247388   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:28.247392   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:28.251395   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:28.746797   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:28.746817   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:28.746824   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:28.746827   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:28.755187   22792 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0913 18:43:29.247368   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:29.247393   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:29.247402   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:29.247410   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:29.250841   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:29.747281   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:29.747304   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:29.747312   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:29.747315   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:29.750870   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:29.751575   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:30.246565   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:30.246586   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:30.246594   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:30.246597   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:30.250022   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:30.746560   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:30.746587   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:30.746597   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:30.746602   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:30.750616   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:31.246768   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:31.246788   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:31.246795   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:31.246800   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:31.250304   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:31.746805   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:31.746828   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:31.746838   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:31.746844   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:31.751727   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:31.752531   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:32.246890   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:32.246911   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:32.246924   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:32.246928   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:32.250249   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:32.747092   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:32.747114   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:32.747122   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:32.747127   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:32.750815   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:33.247103   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:33.247125   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:33.247133   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:33.247138   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:33.250742   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:33.747045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:33.747070   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:33.747083   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:33.747087   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:33.751216   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:34.247426   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:34.247454   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:34.247465   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:34.247472   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:34.251446   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:34.252350   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:34.746671   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:34.746699   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:34.746708   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:34.746713   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:34.750454   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:35.246648   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:35.246666   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:35.246675   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:35.246682   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:35.249677   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:35.746686   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:35.746707   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:35.746714   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:35.746718   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:35.750343   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:36.247410   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:36.247438   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:36.247450   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:36.247456   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:36.251732   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:36.252557   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:36.746913   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:36.746933   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:36.746944   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:36.746949   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:36.750250   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:37.247384   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:37.247405   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:37.247414   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:37.247418   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:37.251417   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:37.747331   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:37.747351   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:37.747358   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:37.747362   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:37.751415   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:38.247314   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:38.247336   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:38.247344   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:38.247348   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:38.251107   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:38.746717   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:38.746739   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:38.746752   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:38.746758   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:38.750605   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:38.751267   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:39.247047   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:39.247069   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:39.247079   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:39.247084   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:39.250631   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:39.746863   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:39.746893   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:39.746904   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:39.746911   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:39.750055   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:40.247216   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:40.247240   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:40.247247   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:40.247250   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:40.250686   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:40.746930   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:40.746950   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:40.746958   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:40.746961   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:40.750049   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:41.247174   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:41.247200   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:41.247212   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:41.247217   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:41.250485   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:41.251328   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:41.747306   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:41.747330   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:41.747337   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:41.747340   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:41.750596   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:42.246615   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:42.246642   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:42.246654   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:42.246662   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:42.250518   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:42.746549   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:42.746572   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:42.746580   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:42.746583   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:42.749508   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:43.246689   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:43.246711   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:43.246719   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:43.246724   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:43.250023   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:43.747148   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:43.747170   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:43.747181   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:43.747187   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:43.749897   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:43.750484   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:44.246957   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:44.246981   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:44.246989   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:44.246995   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:44.250339   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:44.747562   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:44.747589   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:44.747601   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:44.747606   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:44.791116   22792 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0913 18:43:45.247268   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:45.247294   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:45.247304   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:45.247310   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:45.251318   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:45.747422   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:45.747445   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:45.747453   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:45.747456   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:45.750923   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:45.751400   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:46.246779   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:46.246806   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.246817   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.246822   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.250788   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.251283   22792 node_ready.go:49] node "ha-617764-m02" has status "Ready":"True"
	I0913 18:43:46.251316   22792 node_ready.go:38] duration metric: took 18.504986298s for node "ha-617764-m02" to be "Ready" ...
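
The roughly twice-per-second GET /api/v1/nodes/ha-617764-m02 requests above are the node_ready wait: minikube keeps re-fetching the Node object until its NodeReady condition reports True, which here takes about 18.5 seconds after the join. A condensed client-go sketch of the same check (kubeconfig path, node name and poll interval are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // same budget as the "waiting up to 6m0s" above
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-617764-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
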
	I0913 18:43:46.251336   22792 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:43:46.251458   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:46.251470   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.251480   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.251488   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.255607   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:46.261970   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.262045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fdhnm
	I0913 18:43:46.262054   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.262061   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.262068   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.264813   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.265441   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.265458   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.265464   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.265468   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.268153   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.268738   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.268758   22792 pod_ready.go:82] duration metric: took 6.7655ms for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.268767   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.268814   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-htrbt
	I0913 18:43:46.268826   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.268836   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.268842   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.271260   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.271819   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.271833   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.271843   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.271847   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.274282   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.274979   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.274998   22792 pod_ready.go:82] duration metric: took 6.225608ms for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.275010   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.275081   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764
	I0913 18:43:46.275092   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.275128   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.275136   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.278197   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.278964   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.278980   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.278992   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.278997   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.281160   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.281715   22792 pod_ready.go:93] pod "etcd-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.281729   22792 pod_ready.go:82] duration metric: took 6.70395ms for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.281739   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.281792   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m02
	I0913 18:43:46.281799   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.281806   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.281812   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.283916   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.284433   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:46.284444   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.284453   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.284464   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.288133   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.288518   22792 pod_ready.go:93] pod "etcd-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.288533   22792 pod_ready.go:82] duration metric: took 6.783837ms for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.288554   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.447062   22792 request.go:632] Waited for 158.444752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:43:46.447156   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:43:46.447167   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.447178   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.447186   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.450727   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
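
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, not from the API server: the rest.Config dumped earlier has QPS:0 and Burst:0, so client-go falls back to its defaults (5 requests/s, burst 10), and the back-to-back pod and node GETs get spaced out accordingly. A sketch of raising those limits on a rest.Config (values and path are illustrative):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	// Zero values mean "use client-go defaults" (5 QPS, burst 10), which is
	// what produces the throttling waits seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // requests made through this clientset are now limited to ~50/s
}
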
	I0913 18:43:46.647832   22792 request.go:632] Waited for 196.372609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.647919   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.647927   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.647999   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.648027   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.651891   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.652784   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.652798   22792 pod_ready.go:82] duration metric: took 364.234884ms for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.652808   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.846869   22792 request.go:632] Waited for 194.006603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:43:46.846945   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:43:46.846952   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.846961   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.846972   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.849816   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:47.046830   22792 request.go:632] Waited for 196.296816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.046892   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.046896   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.046903   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.046908   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.049999   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.050465   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.050482   22792 pod_ready.go:82] duration metric: took 397.667915ms for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.050492   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.247613   22792 request.go:632] Waited for 197.055207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:43:47.247708   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:43:47.247714   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.247722   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.247726   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.251150   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.447025   22792 request.go:632] Waited for 195.29363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:47.447096   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:47.447101   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.447110   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.447115   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.450667   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.451356   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.451383   22792 pod_ready.go:82] duration metric: took 400.884125ms for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.451397   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.647436   22792 request.go:632] Waited for 195.961235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:43:47.647509   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:43:47.647514   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.647521   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.647526   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.651652   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:47.847602   22792 request.go:632] Waited for 195.36147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.847668   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.847674   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.847682   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.847691   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.851451   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.852078   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.852098   22792 pod_ready.go:82] duration metric: took 400.693621ms for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.852111   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.047116   22792 request.go:632] Waited for 194.935132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:43:48.047239   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:43:48.047266   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.047273   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.047277   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.050797   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.246855   22792 request.go:632] Waited for 195.227248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:48.246929   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:48.246936   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.246946   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.246955   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.250290   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.250828   22792 pod_ready.go:93] pod "kube-proxy-92mml" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:48.250845   22792 pod_ready.go:82] duration metric: took 398.720708ms for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.250855   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.447815   22792 request.go:632] Waited for 196.902431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:43:48.447893   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:43:48.447901   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.447912   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.447922   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.450968   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.647003   22792 request.go:632] Waited for 195.22434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:48.647081   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:48.647089   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.647100   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.647108   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.650460   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.651040   22792 pod_ready.go:93] pod "kube-proxy-hqm8n" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:48.651060   22792 pod_ready.go:82] duration metric: took 400.198016ms for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.651072   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.847203   22792 request.go:632] Waited for 196.062994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:43:48.847260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:43:48.847275   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.847283   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.847291   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.850230   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:49.047242   22792 request.go:632] Waited for 196.44001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:49.047295   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:49.047300   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.047307   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.047311   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.051206   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.051718   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:49.051736   22792 pod_ready.go:82] duration metric: took 400.657373ms for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.051746   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.246865   22792 request.go:632] Waited for 195.040081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:43:49.246928   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:43:49.246933   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.246940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.246945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.250686   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.447653   22792 request.go:632] Waited for 196.379077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:49.447718   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:49.447725   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.447736   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.447741   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.451346   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.451937   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:49.451961   22792 pod_ready.go:82] duration metric: took 400.208032ms for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.451976   22792 pod_ready.go:39] duration metric: took 3.200594709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
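
Each of the per-pod waits above applies the same pattern as the node wait, only against the pod's PodReady condition. The helper below is that check in isolation (hypothetical function name, same k8s.io/api import as the earlier sketches):

package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether a pod's PodReady condition is True; this is the
// per-pod test behind each "waiting up to 6m0s for pod ..." entry above.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
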
	I0913 18:43:49.452001   22792 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:43:49.452067   22792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:43:49.469404   22792 api_server.go:72] duration metric: took 21.990223278s to wait for apiserver process to appear ...
	I0913 18:43:49.469427   22792 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:43:49.469457   22792 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0913 18:43:49.474387   22792 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0913 18:43:49.474465   22792 round_trippers.go:463] GET https://192.168.39.145:8443/version
	I0913 18:43:49.474474   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.474483   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.474494   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.475410   22792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 18:43:49.475511   22792 api_server.go:141] control plane version: v1.31.1
	I0913 18:43:49.475529   22792 api_server.go:131] duration metric: took 6.095026ms to wait for apiserver health ...
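
With all control-plane pods Ready, minikube confirms liveness by hitting /healthz (which returned 200 "ok") and /version (v1.31.1) on the apiserver directly. A throwaway probe of the same endpoint, skipping certificate verification purely for illustration (a real client should trust the cluster ca.crt under the .minikube directory shown in the rest.Config above):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: verification is skipped so the sketch works
			// without loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.145:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz -> %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
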
	I0913 18:43:49.475545   22792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:43:49.646847   22792 request.go:632] Waited for 171.210404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:49.646915   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:49.646922   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.646931   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.646938   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.651797   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:49.656746   22792 system_pods.go:59] 17 kube-system pods found
	I0913 18:43:49.656779   22792 system_pods.go:61] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:43:49.656785   22792 system_pods.go:61] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:43:49.656788   22792 system_pods.go:61] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:43:49.656791   22792 system_pods.go:61] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:43:49.656795   22792 system_pods.go:61] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:43:49.656798   22792 system_pods.go:61] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:43:49.656801   22792 system_pods.go:61] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:43:49.656804   22792 system_pods.go:61] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:43:49.656808   22792 system_pods.go:61] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:43:49.656811   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:43:49.656816   22792 system_pods.go:61] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:43:49.656819   22792 system_pods.go:61] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:43:49.656823   22792 system_pods.go:61] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:43:49.656826   22792 system_pods.go:61] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:43:49.656831   22792 system_pods.go:61] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:43:49.656834   22792 system_pods.go:61] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:43:49.656837   22792 system_pods.go:61] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:43:49.656842   22792 system_pods.go:74] duration metric: took 181.289408ms to wait for pod list to return data ...
	I0913 18:43:49.656852   22792 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:43:49.847258   22792 request.go:632] Waited for 190.329384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:43:49.847325   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:43:49.847332   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.847353   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.847376   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.860502   22792 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0913 18:43:49.860781   22792 default_sa.go:45] found service account: "default"
	I0913 18:43:49.860806   22792 default_sa.go:55] duration metric: took 203.946475ms for default service account to be created ...
	I0913 18:43:49.860818   22792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:43:50.047230   22792 request.go:632] Waited for 186.339317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:50.047293   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:50.047300   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:50.047311   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:50.047320   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:50.053175   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:43:50.057396   22792 system_pods.go:86] 17 kube-system pods found
	I0913 18:43:50.057418   22792 system_pods.go:89] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:43:50.057423   22792 system_pods.go:89] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:43:50.057427   22792 system_pods.go:89] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:43:50.057431   22792 system_pods.go:89] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:43:50.057435   22792 system_pods.go:89] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:43:50.057439   22792 system_pods.go:89] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:43:50.057442   22792 system_pods.go:89] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:43:50.057446   22792 system_pods.go:89] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:43:50.057450   22792 system_pods.go:89] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:43:50.057453   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:43:50.057457   22792 system_pods.go:89] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:43:50.057460   22792 system_pods.go:89] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:43:50.057463   22792 system_pods.go:89] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:43:50.057467   22792 system_pods.go:89] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:43:50.057472   22792 system_pods.go:89] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:43:50.057475   22792 system_pods.go:89] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:43:50.057480   22792 system_pods.go:89] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:43:50.057486   22792 system_pods.go:126] duration metric: took 196.658835ms to wait for k8s-apps to be running ...
	I0913 18:43:50.057501   22792 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:43:50.057549   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:43:50.073387   22792 system_svc.go:56] duration metric: took 15.885277ms WaitForService to wait for kubelet
	I0913 18:43:50.073415   22792 kubeadm.go:582] duration metric: took 22.594235765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
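The system_pods and default_sa checks above are GETs on /api/v1/namespaces/kube-system/pods and the default namespace's service accounts. A rough client-go equivalent of the pod check, as a sketch only; the kubeconfig path is an assumption (minikube builds its client from the profile rather than a fixed file):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig pointing at the ha-617764 control plane VIP or node.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		running := p.Status.Phase == corev1.PodRunning
    		fmt.Printf("%s [%s] running=%v\n", p.Name, p.UID, running)
    	}
    }
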
	I0913 18:43:50.073434   22792 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:43:50.247824   22792 request.go:632] Waited for 174.319724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes
	I0913 18:43:50.247892   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes
	I0913 18:43:50.247899   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:50.247910   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:50.247914   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:50.251836   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:50.252517   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:43:50.252547   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:43:50.252570   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:43:50.252576   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:43:50.252585   22792 node_conditions.go:105] duration metric: took 179.145226ms to run NodePressure ...
	I0913 18:43:50.252600   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:43:50.252623   22792 start.go:255] writing updated cluster config ...
	I0913 18:43:50.254637   22792 out.go:201] 
	I0913 18:43:50.256021   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:43:50.256102   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:43:50.257560   22792 out.go:177] * Starting "ha-617764-m03" control-plane node in "ha-617764" cluster
	I0913 18:43:50.258691   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:43:50.258711   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:43:50.258841   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:43:50.258854   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:43:50.258945   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:43:50.259133   22792 start.go:360] acquireMachinesLock for ha-617764-m03: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:43:50.259190   22792 start.go:364] duration metric: took 36.307µs to acquireMachinesLock for "ha-617764-m03"
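acquireMachinesLock serializes machine creation across concurrent minikube operations, retrying every 500ms up to the 13m timeout shown in the lock parameters above. A simplified, illustrative sketch of a named lock with that shape, using an exclusive lock file; the helper name and mechanism are assumptions, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"time"
    )

    // acquireNamedLock is a hypothetical helper: it creates <dir>/<name>.lock
    // exclusively and retries every delay until the timeout expires.
    func acquireNamedLock(dir, name string, delay, timeout time.Duration) (release func(), err error) {
    	path := filepath.Join(dir, name+".lock")
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out waiting for lock %q: %v", name, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	// Delay and timeout mirror the logged lock config (500ms / 13m0s).
    	release, err := acquireNamedLock(os.TempDir(), "ha-617764-m03", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; safe to create the machine")
    }
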
	I0913 18:43:50.259213   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspekto
r-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:50.259350   22792 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0913 18:43:50.260708   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:43:50.260798   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:43:50.260839   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:43:50.276521   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I0913 18:43:50.276883   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:43:50.277314   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:43:50.277333   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:43:50.277654   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:43:50.277825   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:43:50.277948   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:43:50.278139   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:43:50.278171   22792 client.go:168] LocalClient.Create starting
	I0913 18:43:50.278210   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:43:50.278240   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:43:50.278253   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:43:50.278299   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:43:50.278317   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:43:50.278327   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:43:50.278341   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:43:50.278348   22792 main.go:141] libmachine: (ha-617764-m03) Calling .PreCreateCheck
	I0913 18:43:50.278514   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:43:50.278875   22792 main.go:141] libmachine: Creating machine...
	I0913 18:43:50.278886   22792 main.go:141] libmachine: (ha-617764-m03) Calling .Create
	I0913 18:43:50.279010   22792 main.go:141] libmachine: (ha-617764-m03) Creating KVM machine...
	I0913 18:43:50.280249   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found existing default KVM network
	I0913 18:43:50.280409   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found existing private KVM network mk-ha-617764
	I0913 18:43:50.280562   22792 main.go:141] libmachine: (ha-617764-m03) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 ...
	I0913 18:43:50.280585   22792 main.go:141] libmachine: (ha-617764-m03) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:43:50.280698   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.280556   23564 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:43:50.280766   22792 main.go:141] libmachine: (ha-617764-m03) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:43:50.509770   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.509656   23564 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa...
	I0913 18:43:50.718355   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.718232   23564 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/ha-617764-m03.rawdisk...
	I0913 18:43:50.718383   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Writing magic tar header
	I0913 18:43:50.718394   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Writing SSH key tar header
	I0913 18:43:50.718401   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.718356   23564 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 ...
	I0913 18:43:50.718520   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03
	I0913 18:43:50.718542   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 (perms=drwx------)
	I0913 18:43:50.718556   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:43:50.718574   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:43:50.718582   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:43:50.718589   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:43:50.718595   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:43:50.718604   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home
	I0913 18:43:50.718611   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Skipping /home - not owner
	I0913 18:43:50.718635   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:43:50.718653   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:43:50.718671   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:43:50.718679   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:43:50.718689   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:43:50.718694   22792 main.go:141] libmachine: (ha-617764-m03) Creating domain...
	I0913 18:43:50.719572   22792 main.go:141] libmachine: (ha-617764-m03) define libvirt domain using xml: 
	I0913 18:43:50.719592   22792 main.go:141] libmachine: (ha-617764-m03) <domain type='kvm'>
	I0913 18:43:50.719600   22792 main.go:141] libmachine: (ha-617764-m03)   <name>ha-617764-m03</name>
	I0913 18:43:50.719604   22792 main.go:141] libmachine: (ha-617764-m03)   <memory unit='MiB'>2200</memory>
	I0913 18:43:50.719618   22792 main.go:141] libmachine: (ha-617764-m03)   <vcpu>2</vcpu>
	I0913 18:43:50.719627   22792 main.go:141] libmachine: (ha-617764-m03)   <features>
	I0913 18:43:50.719639   22792 main.go:141] libmachine: (ha-617764-m03)     <acpi/>
	I0913 18:43:50.719647   22792 main.go:141] libmachine: (ha-617764-m03)     <apic/>
	I0913 18:43:50.719654   22792 main.go:141] libmachine: (ha-617764-m03)     <pae/>
	I0913 18:43:50.719663   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.719670   22792 main.go:141] libmachine: (ha-617764-m03)   </features>
	I0913 18:43:50.719678   22792 main.go:141] libmachine: (ha-617764-m03)   <cpu mode='host-passthrough'>
	I0913 18:43:50.719685   22792 main.go:141] libmachine: (ha-617764-m03)   
	I0913 18:43:50.719693   22792 main.go:141] libmachine: (ha-617764-m03)   </cpu>
	I0913 18:43:50.719700   22792 main.go:141] libmachine: (ha-617764-m03)   <os>
	I0913 18:43:50.719709   22792 main.go:141] libmachine: (ha-617764-m03)     <type>hvm</type>
	I0913 18:43:50.719719   22792 main.go:141] libmachine: (ha-617764-m03)     <boot dev='cdrom'/>
	I0913 18:43:50.719728   22792 main.go:141] libmachine: (ha-617764-m03)     <boot dev='hd'/>
	I0913 18:43:50.719746   22792 main.go:141] libmachine: (ha-617764-m03)     <bootmenu enable='no'/>
	I0913 18:43:50.719754   22792 main.go:141] libmachine: (ha-617764-m03)   </os>
	I0913 18:43:50.719764   22792 main.go:141] libmachine: (ha-617764-m03)   <devices>
	I0913 18:43:50.719773   22792 main.go:141] libmachine: (ha-617764-m03)     <disk type='file' device='cdrom'>
	I0913 18:43:50.719785   22792 main.go:141] libmachine: (ha-617764-m03)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/boot2docker.iso'/>
	I0913 18:43:50.719794   22792 main.go:141] libmachine: (ha-617764-m03)       <target dev='hdc' bus='scsi'/>
	I0913 18:43:50.719802   22792 main.go:141] libmachine: (ha-617764-m03)       <readonly/>
	I0913 18:43:50.719813   22792 main.go:141] libmachine: (ha-617764-m03)     </disk>
	I0913 18:43:50.719821   22792 main.go:141] libmachine: (ha-617764-m03)     <disk type='file' device='disk'>
	I0913 18:43:50.719832   22792 main.go:141] libmachine: (ha-617764-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:43:50.719849   22792 main.go:141] libmachine: (ha-617764-m03)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/ha-617764-m03.rawdisk'/>
	I0913 18:43:50.719860   22792 main.go:141] libmachine: (ha-617764-m03)       <target dev='hda' bus='virtio'/>
	I0913 18:43:50.719871   22792 main.go:141] libmachine: (ha-617764-m03)     </disk>
	I0913 18:43:50.719881   22792 main.go:141] libmachine: (ha-617764-m03)     <interface type='network'>
	I0913 18:43:50.719888   22792 main.go:141] libmachine: (ha-617764-m03)       <source network='mk-ha-617764'/>
	I0913 18:43:50.719902   22792 main.go:141] libmachine: (ha-617764-m03)       <model type='virtio'/>
	I0913 18:43:50.719913   22792 main.go:141] libmachine: (ha-617764-m03)     </interface>
	I0913 18:43:50.719921   22792 main.go:141] libmachine: (ha-617764-m03)     <interface type='network'>
	I0913 18:43:50.719932   22792 main.go:141] libmachine: (ha-617764-m03)       <source network='default'/>
	I0913 18:43:50.719944   22792 main.go:141] libmachine: (ha-617764-m03)       <model type='virtio'/>
	I0913 18:43:50.719952   22792 main.go:141] libmachine: (ha-617764-m03)     </interface>
	I0913 18:43:50.719961   22792 main.go:141] libmachine: (ha-617764-m03)     <serial type='pty'>
	I0913 18:43:50.719971   22792 main.go:141] libmachine: (ha-617764-m03)       <target port='0'/>
	I0913 18:43:50.719984   22792 main.go:141] libmachine: (ha-617764-m03)     </serial>
	I0913 18:43:50.720013   22792 main.go:141] libmachine: (ha-617764-m03)     <console type='pty'>
	I0913 18:43:50.720036   22792 main.go:141] libmachine: (ha-617764-m03)       <target type='serial' port='0'/>
	I0913 18:43:50.720053   22792 main.go:141] libmachine: (ha-617764-m03)     </console>
	I0913 18:43:50.720064   22792 main.go:141] libmachine: (ha-617764-m03)     <rng model='virtio'>
	I0913 18:43:50.720076   22792 main.go:141] libmachine: (ha-617764-m03)       <backend model='random'>/dev/random</backend>
	I0913 18:43:50.720085   22792 main.go:141] libmachine: (ha-617764-m03)     </rng>
	I0913 18:43:50.720093   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.720103   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.720112   22792 main.go:141] libmachine: (ha-617764-m03)   </devices>
	I0913 18:43:50.720121   22792 main.go:141] libmachine: (ha-617764-m03) </domain>
	I0913 18:43:50.720133   22792 main.go:141] libmachine: (ha-617764-m03) 
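The block above is the libvirt domain XML the kvm2 driver generates for the new node before "Creating domain...". Defining and starting a domain from such XML with the libvirt Go bindings looks roughly like this; it is a sketch only (the driver wraps this in its own plugin API), and the domain.xml file is an assumption standing in for the generated XML:

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Assumption: domain.xml holds XML like the <domain type='kvm'> block logged above.
    	xml, err := os.ReadFile("domain.xml")
    	if err != nil {
    		panic(err)
    	}

    	dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boot it ("Creating domain..." in the log)
    		panic(err)
    	}
    	name, _ := dom.GetName()
    	fmt.Println("started domain", name)
    }
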
	I0913 18:43:50.727105   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:83:c8:09 in network default
	I0913 18:43:50.727653   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring networks are active...
	I0913 18:43:50.727670   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:50.728499   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring network default is active
	I0913 18:43:50.728841   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring network mk-ha-617764 is active
	I0913 18:43:50.729292   22792 main.go:141] libmachine: (ha-617764-m03) Getting domain xml...
	I0913 18:43:50.729984   22792 main.go:141] libmachine: (ha-617764-m03) Creating domain...
	I0913 18:43:51.960516   22792 main.go:141] libmachine: (ha-617764-m03) Waiting to get IP...
	I0913 18:43:51.961283   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:51.961628   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:51.961674   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:51.961619   23564 retry.go:31] will retry after 222.94822ms: waiting for machine to come up
	I0913 18:43:52.185989   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.186489   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.186519   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.186468   23564 retry.go:31] will retry after 348.512697ms: waiting for machine to come up
	I0913 18:43:52.536967   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.537348   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.537378   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.537294   23564 retry.go:31] will retry after 356.439128ms: waiting for machine to come up
	I0913 18:43:52.895652   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.896099   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.896129   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.896049   23564 retry.go:31] will retry after 531.086298ms: waiting for machine to come up
	I0913 18:43:53.428881   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:53.429320   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:53.429348   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:53.429273   23564 retry.go:31] will retry after 545.757086ms: waiting for machine to come up
	I0913 18:43:53.977006   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:53.977444   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:53.977469   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:53.977389   23564 retry.go:31] will retry after 899.801689ms: waiting for machine to come up
	I0913 18:43:54.878395   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:54.878846   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:54.878874   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:54.878805   23564 retry.go:31] will retry after 936.88095ms: waiting for machine to come up
	I0913 18:43:55.817262   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:55.817647   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:55.817673   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:55.817605   23564 retry.go:31] will retry after 1.411862736s: waiting for machine to come up
	I0913 18:43:57.231474   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:57.232007   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:57.232035   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:57.231965   23564 retry.go:31] will retry after 1.158592591s: waiting for machine to come up
	I0913 18:43:58.392379   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:58.392788   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:58.392803   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:58.392764   23564 retry.go:31] will retry after 1.974547795s: waiting for machine to come up
	I0913 18:44:00.369279   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:00.369865   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:00.369894   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:00.369815   23564 retry.go:31] will retry after 2.798968918s: waiting for machine to come up
	I0913 18:44:03.171087   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:03.171475   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:03.171512   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:03.171449   23564 retry.go:31] will retry after 2.54793054s: waiting for machine to come up
	I0913 18:44:05.721058   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:05.721564   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:05.721585   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:05.721527   23564 retry.go:31] will retry after 3.45685189s: waiting for machine to come up
	I0913 18:44:09.179717   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:09.180158   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:09.180185   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:09.180093   23564 retry.go:31] will retry after 4.407544734s: waiting for machine to come up
	I0913 18:44:13.591186   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.591703   22792 main.go:141] libmachine: (ha-617764-m03) Found IP for machine: 192.168.39.118
	I0913 18:44:13.591736   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has current primary IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.591745   22792 main.go:141] libmachine: (ha-617764-m03) Reserving static IP address...
	I0913 18:44:13.592220   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find host DHCP lease matching {name: "ha-617764-m03", mac: "52:54:00:4c:bc:fa", ip: "192.168.39.118"} in network mk-ha-617764
	I0913 18:44:13.663972   22792 main.go:141] libmachine: (ha-617764-m03) Reserved static IP address: 192.168.39.118
	I0913 18:44:13.664003   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Getting to WaitForSSH function...
	I0913 18:44:13.664010   22792 main.go:141] libmachine: (ha-617764-m03) Waiting for SSH to be available...
	I0913 18:44:13.666336   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.666646   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764
	I0913 18:44:13.666682   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find defined IP address of network mk-ha-617764 interface with MAC address 52:54:00:4c:bc:fa
	I0913 18:44:13.666775   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH client type: external
	I0913 18:44:13.666797   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa (-rw-------)
	I0913 18:44:13.666862   22792 main.go:141] libmachine: (ha-617764-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:44:13.666896   22792 main.go:141] libmachine: (ha-617764-m03) DBG | About to run SSH command:
	I0913 18:44:13.666915   22792 main.go:141] libmachine: (ha-617764-m03) DBG | exit 0
	I0913 18:44:13.670667   22792 main.go:141] libmachine: (ha-617764-m03) DBG | SSH cmd err, output: exit status 255: 
	I0913 18:44:13.670691   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 18:44:13.670701   22792 main.go:141] libmachine: (ha-617764-m03) DBG | command : exit 0
	I0913 18:44:13.670712   22792 main.go:141] libmachine: (ha-617764-m03) DBG | err     : exit status 255
	I0913 18:44:13.670722   22792 main.go:141] libmachine: (ha-617764-m03) DBG | output  : 
	I0913 18:44:16.671501   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Getting to WaitForSSH function...
	I0913 18:44:16.674272   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.674700   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.674728   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.674886   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH client type: external
	I0913 18:44:16.674901   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa (-rw-------)
	I0913 18:44:16.674917   22792 main.go:141] libmachine: (ha-617764-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:44:16.674926   22792 main.go:141] libmachine: (ha-617764-m03) DBG | About to run SSH command:
	I0913 18:44:16.674937   22792 main.go:141] libmachine: (ha-617764-m03) DBG | exit 0
	I0913 18:44:16.802087   22792 main.go:141] libmachine: (ha-617764-m03) DBG | SSH cmd err, output: <nil>: 
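WaitForSSH keeps running `exit 0` over SSH until the guest answers: the first attempt at 18:44:13 fails with exit status 255 because no DHCP lease is visible yet, and the retry at 18:44:16 succeeds. A hedged Go sketch of that probe with golang.org/x/crypto/ssh; the address and user come from the log, while the key path placeholder and retry cadence are stand-ins:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }

    func main() {
    	for {
    		if err := sshReady("192.168.39.118:22", "docker", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    }
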
	I0913 18:44:16.802352   22792 main.go:141] libmachine: (ha-617764-m03) KVM machine creation complete!
	I0913 18:44:16.802725   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:44:16.803249   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:16.803483   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:16.803650   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:44:16.803666   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:44:16.804794   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:44:16.804809   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:44:16.804822   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:44:16.804833   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:16.807097   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.807435   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.807460   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.807595   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:16.807770   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.807894   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.808004   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:16.808115   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:16.808373   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:16.808390   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:44:16.917430   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:44:16.917450   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:44:16.917457   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:16.920222   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.920568   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.920593   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.920710   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:16.920899   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.921041   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.921197   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:16.921389   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:16.921627   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:16.921647   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:44:17.035046   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:44:17.035107   22792 main.go:141] libmachine: found compatible host: buildroot
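Provisioner detection amounts to reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A minimal parser for that key=value output, as a sketch; the SSH transport is omitted and the sample string simply repeats the logged output:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    func parseOSRelease(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		kv := strings.SplitN(line, "=", 2)
    		fields[kv[0]] = strings.Trim(kv[1], `"`)
    	}
    	return fields
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	f := parseOSRelease(out)
    	if f["ID"] == "buildroot" {
    		fmt.Println("found compatible host:", f["ID"])
    	}
    }
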
	I0913 18:44:17.035116   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:44:17.035126   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.035348   22792 buildroot.go:166] provisioning hostname "ha-617764-m03"
	I0913 18:44:17.035373   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.035514   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.037946   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.038320   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.038346   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.038484   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.038678   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.038833   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.038940   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.039090   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:17.039237   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:17.039248   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764-m03 && echo "ha-617764-m03" | sudo tee /etc/hostname
	I0913 18:44:17.162627   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764-m03
	
	I0913 18:44:17.162684   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.165667   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.166190   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.166221   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.166426   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.166745   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.166994   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.167180   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.167381   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:17.167575   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:17.167602   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:44:17.289053   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:44:17.289089   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:44:17.289114   22792 buildroot.go:174] setting up certificates
	I0913 18:44:17.289126   22792 provision.go:84] configureAuth start
	I0913 18:44:17.289138   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.289455   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:17.292727   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.293193   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.293219   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.293507   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.296104   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.296401   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.296436   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.296508   22792 provision.go:143] copyHostCerts
	I0913 18:44:17.296548   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:44:17.296589   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:44:17.296601   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:44:17.296679   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:44:17.296782   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:44:17.296810   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:44:17.296819   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:44:17.296874   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:44:17.296935   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:44:17.296958   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:44:17.296967   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:44:17.296998   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:44:17.297108   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764-m03 san=[127.0.0.1 192.168.39.118 ha-617764-m03 localhost minikube]
	I0913 18:44:17.994603   22792 provision.go:177] copyRemoteCerts
	I0913 18:44:17.994665   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:44:17.994687   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.997165   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.997477   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.997501   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.997667   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.997867   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.998004   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.998164   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.085053   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:44:18.085147   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:44:18.113227   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:44:18.113322   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:44:18.139984   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:44:18.140045   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:44:18.163918   22792 provision.go:87] duration metric: took 874.778214ms to configureAuth
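configureAuth generates a server certificate for the new node, signed by the minikube CA, with the SANs listed at 18:44:17 (127.0.0.1, 192.168.39.118, ha-617764-m03, localhost, minikube), then copies it to /etc/docker on the guest. A crypto/x509 sketch of the signing step; the in-memory throwaway CA below is an assumption standing in for ca.pem / ca-key.pem under .minikube/certs:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signServerCert is illustrative: caCert/caKey stand in for ca.pem / ca-key.pem.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-617764-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-617764-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.118")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return err
    	}
    	out, err := os.Create("server.pem")
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	return pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

    func main() {
    	// Throwaway CA generated in memory purely so the sketch runs end to end.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	if err := signServerCert(caCert, caKey); err != nil {
    		panic(err)
    	}
    }
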
	I0913 18:44:18.163947   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:44:18.164223   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:18.164325   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.166705   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.167021   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.167051   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.167203   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.167392   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.167550   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.167683   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.167830   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:18.167978   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:18.167991   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:44:18.407262   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:44:18.407290   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:44:18.407298   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetURL
	I0913 18:44:18.408775   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using libvirt version 6000000
	I0913 18:44:18.411073   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.411441   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.411469   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.411627   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:44:18.411642   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:44:18.411649   22792 client.go:171] duration metric: took 28.133468342s to LocalClient.Create
	I0913 18:44:18.411675   22792 start.go:167] duration metric: took 28.133537197s to libmachine.API.Create "ha-617764"
	I0913 18:44:18.411687   22792 start.go:293] postStartSetup for "ha-617764-m03" (driver="kvm2")
	I0913 18:44:18.411701   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:44:18.411723   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.411923   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:44:18.411947   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.413754   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.414041   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.414067   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.414188   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.414367   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.414521   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.414649   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.500086   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:44:18.504465   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:44:18.504492   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:44:18.504570   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:44:18.504640   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:44:18.504648   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:44:18.504724   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:44:18.513533   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:44:18.538121   22792 start.go:296] duration metric: took 126.41811ms for postStartSetup
	I0913 18:44:18.538175   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:44:18.538744   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:18.541022   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.541373   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.541402   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.541667   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:44:18.541859   22792 start.go:128] duration metric: took 28.282497305s to createHost
	I0913 18:44:18.541881   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.543900   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.544232   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.544274   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.544436   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.544575   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.544729   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.544825   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.544940   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:18.545159   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:18.545174   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:44:18.654826   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726253058.635136982
	
	I0913 18:44:18.654846   22792 fix.go:216] guest clock: 1726253058.635136982
	I0913 18:44:18.654855   22792 fix.go:229] Guest: 2024-09-13 18:44:18.635136982 +0000 UTC Remote: 2024-09-13 18:44:18.541870412 +0000 UTC m=+152.232780684 (delta=93.26657ms)
	I0913 18:44:18.654874   22792 fix.go:200] guest clock delta is within tolerance: 93.26657ms
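For illustration, the guest/host clock comparison above amounts to running `date +%s.%N` on the guest over SSH and checking the offset against a tolerance. A minimal sketch of that check in Go, assuming golang.org/x/crypto/ssh, the machine key path shown elsewhere in this log, and an illustrative 2s threshold (not minikube's actual fix.go logic):

    package main

    import (
        "fmt"
        "log"
        "math"
        "os"
        "strconv"
        "strings"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // SSH details taken from the log above; the tolerance is illustrative.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.118:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Ask the guest for its clock as seconds.nanoseconds since the epoch.
        out, err := session.CombinedOutput("date +%s.%N")
        if err != nil {
            log.Fatal(err)
        }
        guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            log.Fatal(err)
        }

        hostSec := float64(time.Now().UnixNano()) / float64(time.Second)
        delta := time.Duration(math.Abs(hostSec-guestSec) * float64(time.Second))

        const tolerance = 2 * time.Second
        if delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the clock\n", delta)
            return
        }
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    }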
	I0913 18:44:18.654880   22792 start.go:83] releasing machines lock for "ha-617764-m03", held for 28.395679518s
	I0913 18:44:18.654905   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.655148   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:18.657542   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.657923   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.657954   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.660294   22792 out.go:177] * Found network options:
	I0913 18:44:18.661658   22792 out.go:177]   - NO_PROXY=192.168.39.145,192.168.39.203
	W0913 18:44:18.662833   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 18:44:18.662855   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:44:18.662867   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663354   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663520   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663595   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:44:18.663630   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	W0913 18:44:18.663661   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 18:44:18.663686   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:44:18.663750   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:44:18.663773   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.666489   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.666717   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.666864   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.666891   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.667045   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.667063   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.667090   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.667280   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.667318   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.667454   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.667457   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.667656   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.667669   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.667774   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.904393   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:44:18.910388   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:44:18.910459   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:44:18.926370   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:44:18.926401   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:44:18.926455   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:44:18.942741   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:44:18.956665   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:44:18.956716   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:44:18.970209   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:44:18.984000   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:44:19.105582   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:44:19.253613   22792 docker.go:233] disabling docker service ...
	I0913 18:44:19.253679   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:44:19.269462   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:44:19.282397   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:44:19.421118   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:44:19.552164   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:44:19.566377   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:44:19.585430   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:44:19.585485   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.596399   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:44:19.596450   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.607523   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.618292   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.629162   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:44:19.640258   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.651512   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.669361   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.682032   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:44:19.693153   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:44:19.693220   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:44:19.708001   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
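The netfilter probe above tolerates the missing /proc entry by loading br_netfilter and then enabling IPv4 forwarding. A rough sketch of the same fallback in Go, assuming it runs as root on the guest (a simplification for illustration, not the crio.go implementation):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge netfilter sysctl is missing, the kernel module is not loaded yet.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            log.Printf("bridge-nf-call-iptables not present (%v), loading br_netfilter", err)
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }

        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
        log.Println("br_netfilter loaded and IPv4 forwarding enabled")
    }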
	I0913 18:44:19.719219   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:19.842723   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:44:19.941502   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:44:19.941573   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:44:19.946517   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:44:19.946584   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:44:19.951033   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:44:19.994419   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:44:19.994508   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:44:20.026203   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:44:20.057969   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:44:20.059353   22792 out.go:177]   - env NO_PROXY=192.168.39.145
	I0913 18:44:20.060544   22792 out.go:177]   - env NO_PROXY=192.168.39.145,192.168.39.203
	I0913 18:44:20.061885   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:20.064491   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:20.064889   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:20.064910   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:20.065147   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:44:20.069234   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:44:20.085265   22792 mustload.go:65] Loading cluster: ha-617764
	I0913 18:44:20.085536   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:20.085832   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:20.085873   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:20.100678   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0913 18:44:20.101132   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:20.101632   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:20.101652   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:20.101952   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:20.102112   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:44:20.103679   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:44:20.104082   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:20.104127   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:20.118274   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0913 18:44:20.118755   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:20.119183   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:20.119202   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:20.119526   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:20.119672   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:44:20.119844   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.118
	I0913 18:44:20.119854   22792 certs.go:194] generating shared ca certs ...
	I0913 18:44:20.119866   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.119979   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:44:20.120016   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:44:20.120025   22792 certs.go:256] generating profile certs ...
	I0913 18:44:20.120095   22792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:44:20.120118   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f
	I0913 18:44:20.120131   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.118 192.168.39.254]
	I0913 18:44:20.197533   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f ...
	I0913 18:44:20.197562   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f: {Name:mk56f9dfde1b148b5c4a8abc62ca190d87a808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.197747   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f ...
	I0913 18:44:20.197761   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f: {Name:mk8928cafe5417a6fe2ae9196048e3f96fa72023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.197855   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:44:20.198000   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
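The apiserver certificate generated here has to carry every address clients may use: the 10.96.0.1 service IP, loopback, each control-plane node IP, and the 192.168.39.254 VIP. A self-contained sketch of issuing a certificate with those IP SANs using Go's standard library (self-signed for brevity; the real profile cert is signed by the minikubeCA key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        // Every IP the apiserver may be reached on, including the kube-vip VIP.
        sans := []net.IP{
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.145"),
            net.ParseIP("192.168.39.203"),
            net.ParseIP("192.168.39.118"),
            net.ParseIP("192.168.39.254"),
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  sans,
            DNSNames:     []string{"kubernetes", "kubernetes.default", "kubernetes.default.svc.cluster.local"},
        }

        // Self-signed here; a CA-signed cert would pass the CA template and key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }

        certOut, _ := os.Create("apiserver.crt")
        defer certOut.Close()
        pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

        keyOut, _ := os.Create("apiserver.key")
        defer keyOut.Close()
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }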
	I0913 18:44:20.198186   22792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:44:20.198201   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:44:20.198217   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:44:20.198231   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:44:20.198250   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:44:20.198269   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:44:20.198286   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:44:20.198302   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:44:20.226232   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:44:20.226325   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:44:20.226376   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:44:20.226390   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:44:20.226444   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:44:20.226479   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:44:20.226507   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:44:20.226573   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:44:20.226609   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.226629   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.226647   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.226684   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:44:20.229767   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:20.230182   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:44:20.230200   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:20.230414   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:44:20.230602   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:44:20.230742   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:44:20.230837   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:44:20.302398   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 18:44:20.307468   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 18:44:20.320292   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 18:44:20.324955   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 18:44:20.337983   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 18:44:20.344488   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 18:44:20.356113   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 18:44:20.360329   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 18:44:20.371659   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 18:44:20.376502   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 18:44:20.387569   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 18:44:20.391714   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 18:44:20.408717   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:44:20.435090   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:44:20.460942   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:44:20.485491   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:44:20.508611   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0913 18:44:20.532845   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:44:20.555757   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:44:20.578859   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:44:20.602953   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:44:20.628234   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:44:20.653837   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:44:20.678692   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 18:44:20.695969   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 18:44:20.713357   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 18:44:20.730533   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 18:44:20.747290   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 18:44:20.763797   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 18:44:20.780741   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 18:44:20.797290   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:44:20.803524   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:44:20.814404   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.819001   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.819051   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.824835   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:44:20.836589   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:44:20.847760   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.852138   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.852182   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.857733   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:44:20.868683   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:44:20.880517   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.884835   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.884879   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.890420   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
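The `openssl x509 -hash` / `ln -fs` pairs above install each CA under the hash-named symlink that OpenSSL's trust lookup expects (for example b5213941.0 for minikubeCA.pem). A small Go sketch of the same idea, shelling out to openssl (assumes root and uses the cert paths from the log):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCA(certPath string) error {
        // `openssl x509 -hash -noout -in cert` prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Mirror `ln -fs`: remove any stale link first, then recreate it.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, cert := range []string{
            "/usr/share/ca-certificates/11079.pem",
            "/usr/share/ca-certificates/110792.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
        } {
            if err := linkCA(cert); err != nil {
                log.Fatalf("%s: %v", cert, err)
            }
        }
    }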
	I0913 18:44:20.902701   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:44:20.906972   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:44:20.907018   22792 kubeadm.go:934] updating node {m03 192.168.39.118 8443 v1.31.1 crio true true} ...
	I0913 18:44:20.907126   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:44:20.907158   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:44:20.907199   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:44:20.923403   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:44:20.923474   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
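This static-pod manifest is what lets kube-vip advertise the 192.168.39.254 control-plane VIP on port 8443 with leader election and load balancing enabled. A stripped-down sketch of rendering such a manifest from a few parameters with text/template (an illustration only, not minikube's kube-vip config generator):

    package main

    import (
        "os"
        "text/template"
    )

    // Only the fields that vary per cluster in the manifest above.
    type kubeVIPParams struct {
        VIP       string
        Port      string
        Interface string
        Image     string
    }

    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{ .Port }}"}
        - {name: vip_interface, value: {{ .Interface }}}
        - {name: cp_enable, value: "true"}
        - {name: vip_leaderelection, value: "true"}
        - {name: lb_enable, value: "true"}
        - {name: lb_port, value: "{{ .Port }}"}
        - {name: address, value: {{ .VIP }}}
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        volumeMounts:
        - {mountPath: /etc/kubernetes/admin.conf, name: kubeconfig}
      volumes:
      - hostPath: {path: /etc/kubernetes/admin.conf}
        name: kubeconfig
    `

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        err := tmpl.Execute(os.Stdout, kubeVIPParams{
            VIP:       "192.168.39.254",
            Port:      "8443",
            Interface: "eth0",
            Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
        })
        if err != nil {
            panic(err)
        }
    }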
	I0913 18:44:20.923532   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:44:20.933709   22792 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 18:44:20.933772   22792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 18:44:20.943277   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 18:44:20.943297   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 18:44:20.943314   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:44:20.943356   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:44:20.943303   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:44:20.943278   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 18:44:20.943428   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:44:20.943455   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:20.958921   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:44:20.958948   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 18:44:20.958986   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 18:44:20.959011   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 18:44:20.959019   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:44:20.959050   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 18:44:20.983538   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 18:44:20.983581   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
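The kubelet, kubeadm and kubectl binaries are fetched from dl.k8s.io with a `checksum=file:<url>.sha256` hint and only copied to the node when the `stat` probe fails. A minimal sketch of the checksum half of that, verifying a cached binary against its published .sha256 digest (cache path borrowed from the log; the local .sha256 location is hypothetical):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
        "strings"
    )

    // verifySHA256 compares the digest of binPath with the hex digest stored in sumPath.
    func verifySHA256(binPath, sumPath string) error {
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))

        raw, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(raw)) // tolerate "digest" or "digest  filename" styles
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file %s", sumPath)
        }
        want := fields[0]

        if got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        if err := verifySHA256(
            "/home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet",
            "/home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet.sha256",
        ); err != nil {
            log.Fatal(err)
        }
        fmt.Println("kubelet checksum OK")
    }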
	I0913 18:44:21.866684   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 18:44:21.877058   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 18:44:21.896399   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:44:21.913772   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:44:21.931619   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:44:21.936255   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
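The bash one-liner above pins control-plane.minikube.internal to the VIP idempotently: drop any stale line for that host, append a fresh tab-separated mapping, and copy the result back over /etc/hosts. The same pattern sketched in Go (needs root to replace /etc/hosts; a simplification of the shell command, not minikube code):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry for this hostname (same as `grep -v $'\t<name>$'`).
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)

        tmp := hostsPath + ".minikube.tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }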
	I0913 18:44:21.949711   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:22.077379   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:44:22.095404   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:44:22.095709   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:22.095743   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:22.112680   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0913 18:44:22.113186   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:22.113686   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:22.113705   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:22.114081   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:22.114441   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:44:22.114602   22792 start.go:317] joinCluster: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:44:22.114755   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 18:44:22.114776   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:44:22.117737   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:22.118269   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:44:22.118298   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:22.118403   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:44:22.118574   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:44:22.118738   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:44:22.118864   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:44:22.290532   22792 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:44:22.290589   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token szldyi.jx7bkapu8c26p2ux --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m03 --control-plane --apiserver-advertise-address=192.168.39.118 --apiserver-bind-port=8443"
	I0913 18:44:46.125346   22792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token szldyi.jx7bkapu8c26p2ux --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m03 --control-plane --apiserver-advertise-address=192.168.39.118 --apiserver-bind-port=8443": (23.834727038s)
	I0913 18:44:46.125383   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 18:44:46.675572   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764-m03 minikube.k8s.io/updated_at=2024_09_13T18_44_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=false
	I0913 18:44:46.828529   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-617764-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 18:44:46.940550   22792 start.go:319] duration metric: took 24.825943975s to joinCluster
	I0913 18:44:46.940677   22792 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:44:46.941034   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:46.942345   22792 out.go:177] * Verifying Kubernetes components...
	I0913 18:44:46.943542   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:47.214458   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:44:47.257262   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:44:47.257469   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 18:44:47.257525   22792 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.145:8443
	I0913 18:44:47.257700   22792 node_ready.go:35] waiting up to 6m0s for node "ha-617764-m03" to be "Ready" ...
	I0913 18:44:47.257767   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:47.257775   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:47.257782   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:47.257789   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:47.261015   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:47.758260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:47.758281   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:47.758290   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:47.758294   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:47.761632   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:48.258586   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:48.258620   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:48.258639   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:48.258645   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:48.262554   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:48.758911   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:48.758936   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:48.758947   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:48.758952   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:48.763981   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:44:49.258403   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:49.258424   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:49.258432   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:49.258436   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:49.261673   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:49.262242   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
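The GET loop that continues below keeps fetching /api/v1/nodes/ha-617764-m03 until its Ready condition turns True. An equivalent check written directly against client-go, using the kubeconfig path from the log (a sketch of the wait, not the test harness's node_ready helper):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        // Poll roughly every 500ms, matching the cadence visible in the log above.
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "ha-617764-m03", metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            if nodeReady(node) {
                fmt.Println("node ha-617764-m03 is Ready")
                return
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for node to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }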
	I0913 18:44:49.758261   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:49.758284   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:49.758296   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:49.758308   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:49.761487   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:50.258217   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:50.258240   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:50.258250   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:50.258254   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:50.261917   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:50.758653   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:50.758679   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:50.758691   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:50.758697   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:50.761871   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.257891   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:51.257932   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:51.257941   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:51.257945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:51.261395   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.757959   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:51.757987   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:51.758000   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:51.758005   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:51.761401   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.762347   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:52.257922   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:52.257944   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:52.257952   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:52.257957   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:52.262582   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:44:52.757893   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:52.757919   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:52.757928   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:52.757933   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:52.761982   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:44:53.258147   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:53.258170   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:53.258183   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:53.258188   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:53.261248   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:53.758906   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:53.758929   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:53.758938   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:53.758945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:53.762479   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:53.763110   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:54.258911   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:54.258932   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:54.258940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:54.258943   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:54.262344   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:54.758801   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:54.758823   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:54.758831   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:54.758835   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:54.762012   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:55.258836   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:55.258860   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:55.258872   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:55.258878   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:55.262275   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:55.757958   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:55.757997   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:55.758008   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:55.758013   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:55.761419   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:56.258260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:56.258287   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:56.258297   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:56.258304   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:56.261753   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:56.262571   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:56.758786   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:56.758809   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:56.758818   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:56.758821   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:56.762274   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:57.258650   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:57.258677   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:57.258688   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:57.258693   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:57.262219   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:57.758298   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:57.758319   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:57.758329   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:57.758334   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:57.761704   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:58.258395   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:58.258421   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:58.258429   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:58.258434   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:58.262263   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:58.262860   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:58.758293   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:58.758320   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:58.758333   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:58.758340   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:58.761869   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:59.258216   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:59.258240   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:59.258248   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:59.258252   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:59.261660   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:59.758798   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:59.758824   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:59.758833   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:59.758837   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:59.762196   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.257949   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:00.257969   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:00.257977   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:00.257980   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:00.261779   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.758236   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:00.758257   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:00.758266   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:00.758270   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:00.761640   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.762348   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:01.258661   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:01.258684   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:01.258692   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:01.258695   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:01.262043   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:01.758524   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:01.758549   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:01.758559   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:01.758566   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:01.762147   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.258789   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:02.258816   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:02.258827   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:02.258832   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:02.262512   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.757854   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:02.757879   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:02.757889   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:02.757894   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:02.761694   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.762546   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:03.257869   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:03.257891   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:03.257902   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:03.257905   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:03.261551   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:03.758746   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:03.758769   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:03.758777   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:03.758781   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:03.762559   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.257962   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:04.257985   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:04.257993   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:04.257997   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:04.261414   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.758251   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:04.758274   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:04.758282   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:04.758292   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:04.762024   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.762716   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:05.258158   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:05.258180   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:05.258188   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:05.258192   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:05.261750   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:05.758157   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:05.758185   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:05.758191   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:05.758194   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:05.761652   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:06.258659   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:06.258681   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:06.258689   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:06.258693   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:06.262236   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:06.758069   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:06.758107   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:06.758117   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:06.758137   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:06.761583   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.257901   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.257929   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.257940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.257945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.261293   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.261948   22792 node_ready.go:49] node "ha-617764-m03" has status "Ready":"True"
	I0913 18:45:07.261964   22792 node_ready.go:38] duration metric: took 20.004251057s for node "ha-617764-m03" to be "Ready" ...
	I0913 18:45:07.261979   22792 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:45:07.262045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:07.262054   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.262062   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.262070   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.269216   22792 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 18:45:07.278002   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.278075   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fdhnm
	I0913 18:45:07.278083   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.278089   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.278113   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.281227   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.281938   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.281956   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.281967   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.281979   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.284497   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.284957   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.284974   22792 pod_ready.go:82] duration metric: took 6.948175ms for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.284985   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.285047   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-htrbt
	I0913 18:45:07.285058   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.285070   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.285077   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.287707   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.288385   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.288398   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.288408   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.288416   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.291237   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.291898   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.291913   22792 pod_ready.go:82] duration metric: took 6.921874ms for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.291921   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.291976   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764
	I0913 18:45:07.291987   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.291997   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.292002   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.296919   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.297475   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.297487   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.297494   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.297498   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.303799   22792 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 18:45:07.304372   22792 pod_ready.go:93] pod "etcd-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.304400   22792 pod_ready.go:82] duration metric: took 12.472064ms for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.304413   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.304479   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m02
	I0913 18:45:07.304489   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.304500   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.304506   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.309120   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.309935   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:07.309954   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.309964   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.309970   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.314376   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.314767   22792 pod_ready.go:93] pod "etcd-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.314784   22792 pod_ready.go:82] duration metric: took 10.364044ms for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.314793   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.458166   22792 request.go:632] Waited for 143.309667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m03
	I0913 18:45:07.458240   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m03
	I0913 18:45:07.458262   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.458273   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.458280   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.461635   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.658619   22792 request.go:632] Waited for 196.368397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.658677   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.658682   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.658690   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.658699   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.661920   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.662467   22792 pod_ready.go:93] pod "etcd-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.662484   22792 pod_ready.go:82] duration metric: took 347.68543ms for pod "etcd-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.662500   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.858673   22792 request.go:632] Waited for 196.108753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:45:07.858733   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:45:07.858738   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.858757   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.858764   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.861654   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:08.058763   22792 request.go:632] Waited for 196.379707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:08.058857   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:08.058869   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.058881   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.058890   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.062245   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.062930   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.062951   22792 pod_ready.go:82] duration metric: took 400.444861ms for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.062963   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.257911   22792 request.go:632] Waited for 194.878186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:45:08.257985   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:45:08.257992   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.258002   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.258011   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.261892   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.458402   22792 request.go:632] Waited for 195.746351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:08.458486   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:08.458497   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.458509   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.458520   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.462081   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.463183   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.463206   22792 pod_ready.go:82] duration metric: took 400.237121ms for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.463220   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.658691   22792 request.go:632] Waited for 195.384277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m03
	I0913 18:45:08.658743   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m03
	I0913 18:45:08.658749   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.658756   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.658760   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.662235   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.858689   22792 request.go:632] Waited for 195.371118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:08.858776   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:08.858789   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.858798   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.858807   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.862189   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.862736   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.862759   22792 pod_ready.go:82] duration metric: took 399.530638ms for pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.862772   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.058077   22792 request.go:632] Waited for 195.237895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:45:09.058174   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:45:09.058182   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.058195   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.058205   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.061599   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.258562   22792 request.go:632] Waited for 196.201704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:09.258636   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:09.258647   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.258657   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.258665   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.261933   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.262732   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:09.262754   22792 pod_ready.go:82] duration metric: took 399.972907ms for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.262768   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.458787   22792 request.go:632] Waited for 195.940964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:45:09.458839   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:45:09.458844   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.458852   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.458857   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.462034   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.657980   22792 request.go:632] Waited for 195.27571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:09.658064   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:09.658074   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.658086   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.658113   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.661913   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.662725   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:09.662743   22792 pod_ready.go:82] duration metric: took 399.963324ms for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.662752   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.858912   22792 request.go:632] Waited for 196.078833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m03
	I0913 18:45:09.858972   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m03
	I0913 18:45:09.858979   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.858988   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.858995   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.862666   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.058882   22792 request.go:632] Waited for 195.333873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.058952   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.058960   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.058967   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.058971   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.062375   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.063280   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.063298   22792 pod_ready.go:82] duration metric: took 400.53806ms for pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.063308   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bpk5" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.258308   22792 request.go:632] Waited for 194.921956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bpk5
	I0913 18:45:10.258366   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bpk5
	I0913 18:45:10.258372   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.258383   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.258393   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.261695   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.458778   22792 request.go:632] Waited for 196.165114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.458835   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.458842   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.458851   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.458856   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.462795   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.463269   22792 pod_ready.go:93] pod "kube-proxy-7bpk5" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.463285   22792 pod_ready.go:82] duration metric: took 399.971446ms for pod "kube-proxy-7bpk5" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.463295   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.658473   22792 request.go:632] Waited for 195.113067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:45:10.658534   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:45:10.658540   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.658547   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.658552   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.662470   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.858668   22792 request.go:632] Waited for 195.3392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:10.858733   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:10.858740   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.858751   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.858759   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.861462   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:10.862049   22792 pod_ready.go:93] pod "kube-proxy-92mml" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.862071   22792 pod_ready.go:82] duration metric: took 398.769606ms for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.862082   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.058203   22792 request.go:632] Waited for 196.022069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:45:11.058265   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:45:11.058270   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.058277   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.058281   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.061914   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.258044   22792 request.go:632] Waited for 195.273377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:11.258117   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:11.258126   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.258138   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.258145   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.261745   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.262304   22792 pod_ready.go:93] pod "kube-proxy-hqm8n" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:11.262327   22792 pod_ready.go:82] duration metric: took 400.239534ms for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.262337   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.458444   22792 request.go:632] Waited for 196.01969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:45:11.458497   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:45:11.458504   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.458514   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.458521   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.461946   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.657948   22792 request.go:632] Waited for 195.28823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:11.658002   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:11.658007   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.658017   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.658023   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.661841   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.662470   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:11.662492   22792 pod_ready.go:82] duration metric: took 400.146385ms for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.662506   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.858450   22792 request.go:632] Waited for 195.863677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:45:11.858507   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:45:11.858512   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.858522   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.858526   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.861821   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.058902   22792 request.go:632] Waited for 196.361586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:12.058952   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:12.058957   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.058964   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.058968   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.062080   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.062688   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:12.062705   22792 pod_ready.go:82] duration metric: took 400.191873ms for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.062717   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.258239   22792 request.go:632] Waited for 195.452487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m03
	I0913 18:45:12.258294   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m03
	I0913 18:45:12.258299   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.258306   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.258310   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.261850   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.458741   22792 request.go:632] Waited for 196.359842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:12.458799   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:12.458804   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.458812   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.458819   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.461925   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.462443   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:12.462462   22792 pod_ready.go:82] duration metric: took 399.738229ms for pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.462476   22792 pod_ready.go:39] duration metric: took 5.200482826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:45:12.462493   22792 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:45:12.462545   22792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:45:12.479364   22792 api_server.go:72] duration metric: took 25.538641921s to wait for apiserver process to appear ...
	I0913 18:45:12.479384   22792 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:45:12.479408   22792 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0913 18:45:12.483655   22792 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0913 18:45:12.483722   22792 round_trippers.go:463] GET https://192.168.39.145:8443/version
	I0913 18:45:12.483732   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.483743   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.483752   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.484691   22792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 18:45:12.484751   22792 api_server.go:141] control plane version: v1.31.1
	I0913 18:45:12.484765   22792 api_server.go:131] duration metric: took 5.374766ms to wait for apiserver health ...
	I0913 18:45:12.484771   22792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:45:12.658175   22792 request.go:632] Waited for 173.338358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:12.658263   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:12.658282   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.658293   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.658301   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.663873   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:45:12.670428   22792 system_pods.go:59] 24 kube-system pods found
	I0913 18:45:12.670456   22792 system_pods.go:61] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:45:12.670461   22792 system_pods.go:61] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:45:12.670466   22792 system_pods.go:61] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:45:12.670469   22792 system_pods.go:61] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:45:12.670473   22792 system_pods.go:61] "etcd-ha-617764-m03" [4247e8e8-fa8d-47f3-9ab3-1ec5c9d85de9] Running
	I0913 18:45:12.670476   22792 system_pods.go:61] "kindnet-8mbkd" [4fe1b67c-b4ca-4839-bbc9-2bfeddf91611] Running
	I0913 18:45:12.670479   22792 system_pods.go:61] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:45:12.670482   22792 system_pods.go:61] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:45:12.670485   22792 system_pods.go:61] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:45:12.670489   22792 system_pods.go:61] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:45:12.670492   22792 system_pods.go:61] "kube-apiserver-ha-617764-m03" [3dedc18a-1964-41af-8797-eec61443095e] Running
	I0913 18:45:12.670496   22792 system_pods.go:61] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:45:12.670499   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:45:12.670502   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m03" [2ef16dd1-da44-4c17-b191-f13d7401a21d] Running
	I0913 18:45:12.670506   22792 system_pods.go:61] "kube-proxy-7bpk5" [075a72a7-32a5-4502-b52d-eeba572f94d4] Running
	I0913 18:45:12.670509   22792 system_pods.go:61] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:45:12.670512   22792 system_pods.go:61] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:45:12.670515   22792 system_pods.go:61] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:45:12.670519   22792 system_pods.go:61] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:45:12.670522   22792 system_pods.go:61] "kube-scheduler-ha-617764-m03" [01d83f8e-84af-4ebb-a64d-90a1a4dd7799] Running
	I0913 18:45:12.670525   22792 system_pods.go:61] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:45:12.670528   22792 system_pods.go:61] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:45:12.670531   22792 system_pods.go:61] "kube-vip-ha-617764-m03" [21987759-d9ea-4367-96c5-f95df97fa81a] Running
	I0913 18:45:12.670534   22792 system_pods.go:61] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:45:12.670540   22792 system_pods.go:74] duration metric: took 185.763517ms to wait for pod list to return data ...
	I0913 18:45:12.670547   22792 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:45:12.857932   22792 request.go:632] Waited for 187.304017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:45:12.858002   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:45:12.858012   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.858024   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.858031   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.861412   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.861530   22792 default_sa.go:45] found service account: "default"
	I0913 18:45:12.861547   22792 default_sa.go:55] duration metric: took 190.99324ms for default service account to be created ...
	I0913 18:45:12.861561   22792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:45:13.058902   22792 request.go:632] Waited for 197.279772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:13.058968   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:13.058975   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:13.058983   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:13.058989   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:13.064227   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:45:13.070856   22792 system_pods.go:86] 24 kube-system pods found
	I0913 18:45:13.070880   22792 system_pods.go:89] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:45:13.070885   22792 system_pods.go:89] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:45:13.070889   22792 system_pods.go:89] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:45:13.070892   22792 system_pods.go:89] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:45:13.070896   22792 system_pods.go:89] "etcd-ha-617764-m03" [4247e8e8-fa8d-47f3-9ab3-1ec5c9d85de9] Running
	I0913 18:45:13.070899   22792 system_pods.go:89] "kindnet-8mbkd" [4fe1b67c-b4ca-4839-bbc9-2bfeddf91611] Running
	I0913 18:45:13.070902   22792 system_pods.go:89] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:45:13.070905   22792 system_pods.go:89] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:45:13.070908   22792 system_pods.go:89] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:45:13.070912   22792 system_pods.go:89] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:45:13.070916   22792 system_pods.go:89] "kube-apiserver-ha-617764-m03" [3dedc18a-1964-41af-8797-eec61443095e] Running
	I0913 18:45:13.070920   22792 system_pods.go:89] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:45:13.070924   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:45:13.070928   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m03" [2ef16dd1-da44-4c17-b191-f13d7401a21d] Running
	I0913 18:45:13.070934   22792 system_pods.go:89] "kube-proxy-7bpk5" [075a72a7-32a5-4502-b52d-eeba572f94d4] Running
	I0913 18:45:13.070938   22792 system_pods.go:89] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:45:13.070944   22792 system_pods.go:89] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:45:13.070947   22792 system_pods.go:89] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:45:13.070951   22792 system_pods.go:89] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:45:13.070955   22792 system_pods.go:89] "kube-scheduler-ha-617764-m03" [01d83f8e-84af-4ebb-a64d-90a1a4dd7799] Running
	I0913 18:45:13.070961   22792 system_pods.go:89] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:45:13.070964   22792 system_pods.go:89] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:45:13.070967   22792 system_pods.go:89] "kube-vip-ha-617764-m03" [21987759-d9ea-4367-96c5-f95df97fa81a] Running
	I0913 18:45:13.070970   22792 system_pods.go:89] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:45:13.070975   22792 system_pods.go:126] duration metric: took 209.406637ms to wait for k8s-apps to be running ...
	I0913 18:45:13.070983   22792 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:45:13.071021   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:45:13.090449   22792 system_svc.go:56] duration metric: took 19.454477ms WaitForService to wait for kubelet
	I0913 18:45:13.090497   22792 kubeadm.go:582] duration metric: took 26.149775771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:45:13.090519   22792 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:45:13.258912   22792 request.go:632] Waited for 168.315715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes
	I0913 18:45:13.258991   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes
	I0913 18:45:13.259000   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:13.259020   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:13.259027   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:13.263259   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:13.264256   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264275   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264288   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264294   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264299   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264303   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264308   22792 node_conditions.go:105] duration metric: took 173.783377ms to run NodePressure ...
	I0913 18:45:13.264323   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:45:13.264349   22792 start.go:255] writing updated cluster config ...
	I0913 18:45:13.264642   22792 ssh_runner.go:195] Run: rm -f paused
	I0913 18:45:13.317314   22792 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:45:13.319418   22792 out.go:177] * Done! kubectl is now configured to use "ha-617764" cluster and "default" namespace by default
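
	The start log above ends with minikube polling the API server until every kube-system pod reports Running (system_pods.go) and then sampling per-node cpu and ephemeral-storage capacity (node_conditions.go). A minimal client-go sketch of those two checks follows; it is not minikube's implementation, and the kubeconfig path is an assumption (the default file that 'minikube start' writes). Against the cluster above it would list the 24 kube-system pods and report cpu=2 and ephemeral-storage=17734596Ki for each of the three nodes.

	// Hedged sketch: list kube-system pods and report node capacity with client-go.
	// Assumption: kubeconfig at the default location (~/.kube/config).
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		// Rough equivalent of the system_pods wait: every kube-system pod should be Running.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
		}

		// Rough equivalent of the node_conditions check: cpu and ephemeral-storage capacity per node.
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}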
	
	
	==> CRI-O <==
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.845722058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253333845636531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c51dd02-7077-4dbe-84b6-c918ad74110b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.846175097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbcefeb0-15e1-4289-ac11-560337b5aee4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.846288222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbcefeb0-15e1-4289-ac11-560337b5aee4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.846499825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbcefeb0-15e1-4289-ac11-560337b5aee4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.885170153Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68b3bd4b-c87a-48aa-8f19-481ae86b36ee name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.885317164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68b3bd4b-c87a-48aa-8f19-481ae86b36ee name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.886784388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bf3fb03-0030-4dc2-bf26-5f49bc33a1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.887196745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253333887175535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bf3fb03-0030-4dc2-bf26-5f49bc33a1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.887833778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32606505-d1b0-4b43-9f2a-f00c0fc7415c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.887888569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32606505-d1b0-4b43-9f2a-f00c0fc7415c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.888104013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32606505-d1b0-4b43-9f2a-f00c0fc7415c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.926500272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a39f3e5-4d9c-4872-bd1c-2631947de244 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.926591174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a39f3e5-4d9c-4872-bd1c-2631947de244 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.928022553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98b0b91c-2a15-454f-97a7-e42d4f30a6b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.928507305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253333928483597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98b0b91c-2a15-454f-97a7-e42d4f30a6b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.928948711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b1ae186-d4cb-4acb-9919-320279dda785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.929026705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b1ae186-d4cb-4acb-9919-320279dda785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.929316285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b1ae186-d4cb-4acb-9919-320279dda785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.975656795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e329a634-bd73-4a7a-a994-1f109b09ea1e name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.975750498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e329a634-bd73-4a7a-a994-1f109b09ea1e name=/runtime.v1.RuntimeService/Version
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.979509032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0547b6a7-d0c4-4ac6-9380-288600c084c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.979917095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253333979895924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0547b6a7-d0c4-4ac6-9380-288600c084c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.980669789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e164ec4-9d69-441f-b7d8-07ca831b03f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.980721328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e164ec4-9d69-441f-b7d8-07ca831b03f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:48:53 ha-617764 crio[672]: time="2024-09-13 18:48:53.980930990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e164ec4-9d69-441f-b7d8-07ca831b03f6 name=/runtime.v1.RuntimeService/ListContainers
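
	The repeated Version, ImageFsInfo and ListContainers entries above are CRI-O answering the kubelet's periodic CRI polls over its gRPC socket. The sketch below issues the same three RPCs directly through the published CRI client; it is not kubelet or minikube code, and the socket path (CRI-O's default, /var/run/crio/crio.sock) is an assumption for this VM. On the node, "sudo crictl ps -a" drives the same ListContainers call and prints a table like the one in the next section.

	// Hedged sketch: call CRI-O's CRI endpoint the way the debug log above shows it being polled.
	// Assumption: run on the node (e.g. via "minikube ssh") as root, against CRI-O's default socket.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx := context.Background()
		rt := pb.NewRuntimeServiceClient(conn)
		img := pb.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &pb.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &pb.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with no filter, as in the log above
		list, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}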
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0d456d4bd90d2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   99c7958cb4872       busybox-7dff88458-t4fwq
	3502979cf3ea1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bd08f2ca13336       coredns-7c65d6cfc9-fdhnm
	31a66627d146a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   e586cc7654290       coredns-7c65d6cfc9-htrbt
	0647676f81788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   83953eef4efcd       storage-provisioner
	7e98c43ffb734       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   47bf978975921       kindnet-b9bzd
	5065ca7882269       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   585827783c674       kube-proxy-92mml
	b116fa0d9ecbf       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   12d8d3bba4f5d       kube-vip-ha-617764
	8a41f6c9e152d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c771b93aaed83       kube-controller-manager-ha-617764
	8a31170a295b7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   16bf73d50b501       kube-scheduler-ha-617764
	1d66613ccb1f2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   4d7e2cf8f9de8       kube-apiserver-ha-617764
	3b2f0c73fe9ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   353214980e0a1       etcd-ha-617764
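
	A container table like the one above is the kind of output crictl produces inside the node. A minimal sketch of regenerating it for this profile, assuming the ha-617764 VM from this run is still present and crictl is pointed at the CRI-O socket (both minikube defaults, but assumptions here):

	  # list all containers (running and exited) inside the node
	  minikube ssh -p ha-617764 -- sudo crictl ps -a
	  # re-collect the full log bundle for the profile
	  minikube logs -p ha-617764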
	
	
	==> coredns [31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b] <==
	[INFO] 10.244.1.2:49297 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01533144s
	[INFO] 10.244.1.2:34775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173343s
	[INFO] 10.244.1.2:48094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185771s
	[INFO] 10.244.1.2:38224 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261627s
	[INFO] 10.244.2.2:46762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001531358s
	[INFO] 10.244.2.2:49140 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110788s
	[INFO] 10.244.2.2:48200 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122858s
	[INFO] 10.244.0.4:42212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107526s
	[INFO] 10.244.0.4:55473 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001625324s
	[INFO] 10.244.0.4:57662 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027413s
	[INFO] 10.244.0.4:42804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086384s
	[INFO] 10.244.1.2:42712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149698s
	[INFO] 10.244.1.2:33468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117843s
	[INFO] 10.244.1.2:53696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125501s
	[INFO] 10.244.1.2:59050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121214s
	[INFO] 10.244.2.2:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129604s
	[INFO] 10.244.2.2:33290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127517s
	[INFO] 10.244.0.4:48739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096314s
	[INFO] 10.244.0.4:42249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049139s
	[INFO] 10.244.1.2:35348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327466s
	[INFO] 10.244.1.2:36802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158894s
	[INFO] 10.244.2.2:33661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134839s
	[INFO] 10.244.2.2:41493 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135174s
	[INFO] 10.244.0.4:55720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006804s
	[INFO] 10.244.0.4:59841 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009592s
	
	
	==> coredns [3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d] <==
	[INFO] 10.244.0.4:34399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211802s
	[INFO] 10.244.0.4:50067 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000522446s
	[INFO] 10.244.0.4:39102 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001720209s
	[INFO] 10.244.1.2:37027 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286563s
	[INFO] 10.244.1.2:60285 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013541777s
	[INFO] 10.244.1.2:53881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133465s
	[INFO] 10.244.2.2:44355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163171s
	[INFO] 10.244.2.2:36763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001800499s
	[INFO] 10.244.2.2:41469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115361s
	[INFO] 10.244.2.2:40909 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145743s
	[INFO] 10.244.2.2:44681 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149088s
	[INFO] 10.244.0.4:51555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069764s
	[INFO] 10.244.0.4:53574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001057592s
	[INFO] 10.244.0.4:45350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035427s
	[INFO] 10.244.0.4:48145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190172s
	[INFO] 10.244.2.2:36852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187208s
	[INFO] 10.244.2.2:58201 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010302s
	[INFO] 10.244.0.4:45335 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139302s
	[INFO] 10.244.0.4:41623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054642s
	[INFO] 10.244.1.2:43471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145957s
	[INFO] 10.244.1.2:55858 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179256s
	[INFO] 10.244.2.2:35120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154146s
	[INFO] 10.244.2.2:57748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106668s
	[INFO] 10.244.0.4:35176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009163s
	[INFO] 10.244.0.4:35630 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191227s
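
	The two coredns blocks above are per-pod container logs. An equivalent view can be pulled with kubectl, assuming the cluster from this run is still reachable and that the kubectl context minikube created for the profile is named ha-617764 (an assumption; the pod names are taken from the container-status table above):

	  # logs for each coredns replica in kube-system
	  kubectl --context ha-617764 -n kube-system logs coredns-7c65d6cfc9-fdhnm
	  kubectl --context ha-617764 -n kube-system logs coredns-7c65d6cfc9-htrbt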
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:48:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s  kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s  kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s  kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-617764 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:46:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    a73fc468-bba1-4d38-b835-10012a86fc0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-617764-m02 status is now: NodeNotReady
	
	
	Name:               ha-617764-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_44_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:48:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-617764-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf9ad263c8a24e5ab1b585d83dd0c49b
	  System UUID:                bf9ad263-c8a2-4e5a-b1b5-85d83dd0c49b
	  Boot ID:                    5302b469-e319-46e4-a87d-2fbb7190087e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-srmxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-617764-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-8mbkd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-617764-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-617764-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-7bpk5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-617764-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-vip-ha-617764-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-617764-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:48:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:46:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    d2b9d80d-fb6e-4958-9da8-1e29e77fa9a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-47jgz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-5rlkn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-617764-m04 status is now: NodeReady
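
	The per-node dumps above are kubectl describe output; ha-617764-m02 is the only node reporting Unknown conditions and the unreachable taints, consistent with a secondary control-plane node having been stopped. A sketch of re-querying the same state, under the same context-name assumption as above:

	  # summary view of all four nodes, including Ready/NotReady status
	  kubectl --context ha-617764 get nodes -o wide
	  # full conditions, taints and events for the NotReady control-plane node
	  kubectl --context ha-617764 describe node ha-617764-m02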
	
	
	==> dmesg <==
	[Sep13 18:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050724] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.773769] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.469844] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep13 18:42] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.036071] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051740] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182667] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.119649] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.275654] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.901030] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.328019] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
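
	The dmesg excerpt above comes from the node's kernel ring buffer; it can be re-read over SSH, assuming the same profile is still running:

	  # kernel messages from inside the ha-617764 VM
	  minikube ssh -p ha-617764 -- sudo dmesg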
	
	
	==> etcd [3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5] <==
	{"level":"warn","ts":"2024-09-13T18:48:54.254655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.260069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.264950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.268550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.277941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.279694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.288148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.296062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.299725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.299802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.303422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.309184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.315385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.321682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.335085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.338520Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.344663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.350518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.356965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.360303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.360443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.363664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.367081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.373300Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:48:54.381132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
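
	The repeated "dropped internal Raft message ... remote-peer-active: false" warnings mean the local etcd member (44b3a0f32f80bb09) cannot deliver heartbeats to peer 130da78b66ce9e95, consistent with one control-plane member being down. A sketch of filtering these out of the live pod log, under the same context-name assumption:

	  # count how often the local member drops heartbeats to the unreachable peer
	  kubectl --context ha-617764 -n kube-system logs etcd-ha-617764 | grep -c 'dropped internal Raft message'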
	
	
	==> kernel <==
	 18:48:54 up 7 min,  0 users,  load average: 0.25, 0.25, 0.13
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1] <==
	I0913 18:48:15.370092       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:48:25.376495       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:48:25.376559       1 main.go:299] handling current node
	I0913 18:48:25.376582       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:48:25.376615       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:48:25.376791       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:48:25.376817       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:48:25.376908       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:48:25.376931       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:48:35.369164       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:48:35.369296       1 main.go:299] handling current node
	I0913 18:48:35.369317       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:48:35.369335       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:48:35.369501       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:48:35.369528       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:48:35.369588       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:48:35.369610       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:48:45.374705       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:48:45.374761       1 main.go:299] handling current node
	I0913 18:48:45.374776       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:48:45.374781       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:48:45.374949       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:48:45.374973       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:48:45.375025       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:48:45.375045       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80] <==
	I0913 18:42:28.598737       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0913 18:42:28.608881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 18:42:32.773006       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0913 18:42:33.086168       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0913 18:43:24.969062       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.969780       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 556.42µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0913 18:43:24.970485       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.971765       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.973078       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.502168ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0913 18:45:20.119793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57468: use of closed network connection
	E0913 18:45:20.304607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57490: use of closed network connection
	E0913 18:45:20.490076       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57522: use of closed network connection
	E0913 18:45:20.696819       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57540: use of closed network connection
	E0913 18:45:20.876650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57570: use of closed network connection
	E0913 18:45:21.058755       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57580: use of closed network connection
	E0913 18:45:21.228559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57598: use of closed network connection
	E0913 18:45:21.414467       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57608: use of closed network connection
	E0913 18:45:21.597909       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57624: use of closed network connection
	E0913 18:45:21.914524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57660: use of closed network connection
	E0913 18:45:22.101117       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57682: use of closed network connection
	E0913 18:45:22.284427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57706: use of closed network connection
	E0913 18:45:22.461214       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57722: use of closed network connection
	E0913 18:45:22.648226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57742: use of closed network connection
	E0913 18:45:22.810730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57764: use of closed network connection
	W0913 18:46:47.295761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118 192.168.39.145]
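
	The final apiserver line shows the kubernetes service endpoints being reset to only 192.168.39.118 and 192.168.39.145, i.e. the .203 control-plane address (ha-617764-m02) is no longer listed, again consistent with that member being down. The current endpoint set can be re-checked with, assuming the same context:

	  # control-plane addresses currently backing the default/kubernetes service
	  kubectl --context ha-617764 get endpoints kubernetes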
	
	
	==> kube-controller-manager [8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14] <==
	I0913 18:45:52.540334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:52.551008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:52.757917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	E0913 18:45:52.767717       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a6795e37-2984-4e51-b0e9-20f3c3a9e522\", ResourceVersion:\"935\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 13, 18, 42, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\
\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20240813-c6f155d6\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b8a0a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name
:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbb9c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolu
meClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbb9f8), EmptyDir:(*v1.EmptyDirVolumeSourc
e)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwo
rxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbba28), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b8a0c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVa
rSource)(0xc001b8a100)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fa
lse, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026b0600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCo
ntainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002575ec0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002378900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil),
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0024fb120)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002575efc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0913 18:45:52.776897       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"e6e9eb5f-8178-4a93-9c83-0365ad1f7e6b\", ResourceVersion:\"888\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 13, 18, 42, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0017b3480), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0024c1f00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c27590), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c275a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0017b3500)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:
\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00259c2a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001d6db68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00235af00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0023a9bc0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001d6dbc0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfill
ed on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 18:45:53.180832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.199944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.241929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.242678       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m04"
	I0913 18:45:57.257869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:02.771478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:13.500518       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-617764-m04"
	I0913 18:46:13.500677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:13.514958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:17.098470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:23.493929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:47:12.126896       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-617764-m04"
	I0913 18:47:12.126991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:12.156704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:12.303974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.787558ms"
	I0913 18:47:12.304320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.062µs"
	I0913 18:47:12.351458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:17.350018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	
	
	==> kube-proxy [5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:42:34.167647       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:42:34.198918       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0913 18:42:34.199182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:42:34.253828       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:42:34.253872       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:42:34.253905       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:42:34.256484       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:42:34.257771       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:42:34.257801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:42:34.260502       1 config.go:199] "Starting service config controller"
	I0913 18:42:34.260914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:42:34.261139       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:42:34.261164       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:42:34.262109       1 config.go:328] "Starting node config controller"
	I0913 18:42:34.262140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:42:34.361759       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:42:34.361863       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:42:34.362333       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c] <==
	W0913 18:42:26.897784       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:42:26.897834       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:42:29.405500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:45:52.604951       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tw74q\": pod kube-proxy-tw74q is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tw74q" node="ha-617764-m04"
	E0913 18:45:52.605182       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tw74q\": pod kube-proxy-tw74q is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-tw74q"
	E0913 18:45:52.616503       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-47jgz\": pod kindnet-47jgz is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-47jgz" node="ha-617764-m04"
	E0913 18:45:52.616777       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 52c2fe7a-7d09-4d11-ae85-b0fc016f6f16(kube-system/kindnet-47jgz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-47jgz"
	E0913 18:45:52.616962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-47jgz\": pod kindnet-47jgz is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-47jgz"
	I0913 18:45:52.617091       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-47jgz" node="ha-617764-m04"
	E0913 18:45:52.684630       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j4ht7\": pod kindnet-j4ht7 is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j4ht7" node="ha-617764-m04"
	E0913 18:45:52.684705       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 427dbc82-b752-4208-aa44-73c372996446(kube-system/kindnet-j4ht7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-j4ht7"
	E0913 18:45:52.684722       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j4ht7\": pod kindnet-j4ht7 is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-j4ht7"
	I0913 18:45:52.684740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-j4ht7" node="ha-617764-m04"
	E0913 18:45:52.688566       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jvrw5\": pod kindnet-jvrw5 is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.688697       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1c4990d1-e2c7-48fe-85a3-c6571c60c9b7(kube-system/kindnet-jvrw5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jvrw5"
	E0913 18:45:52.688716       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jvrw5\": pod kindnet-jvrw5 is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-jvrw5"
	I0913 18:45:52.688769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.689590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.689658       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fb31ed1c-fbc0-46ca-b60c-7201362519ff(kube-system/kube-proxy-5rlkn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5rlkn"
	E0913 18:45:52.689678       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-5rlkn"
	I0913 18:45:52.689696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.694462       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:45:52.694585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 848151c4-6f4d-47e6-9447-bd1d09469957(kube-system/kube-proxy-xtt2d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xtt2d"
	E0913 18:45:52.694606       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-xtt2d"
	I0913 18:45:52.694636       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	
	
	==> kubelet <==
	Sep 13 18:47:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:47:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:47:28 ha-617764 kubelet[1315]: E0913 18:47:28.638424    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253248638053949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:28 ha-617764 kubelet[1315]: E0913 18:47:28.638452    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253248638053949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:38 ha-617764 kubelet[1315]: E0913 18:47:38.639589    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253258639210015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:38 ha-617764 kubelet[1315]: E0913 18:47:38.639622    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253258639210015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:48 ha-617764 kubelet[1315]: E0913 18:47:48.641668    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253268641193323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:48 ha-617764 kubelet[1315]: E0913 18:47:48.641713    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253268641193323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:58 ha-617764 kubelet[1315]: E0913 18:47:58.644032    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253278643462310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:47:58 ha-617764 kubelet[1315]: E0913 18:47:58.644368    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253278643462310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:08 ha-617764 kubelet[1315]: E0913 18:48:08.646005    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253288645599841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:08 ha-617764 kubelet[1315]: E0913 18:48:08.646058    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253288645599841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:18 ha-617764 kubelet[1315]: E0913 18:48:18.647954    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253298647383649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:18 ha-617764 kubelet[1315]: E0913 18:48:18.647995    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253298647383649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:28 ha-617764 kubelet[1315]: E0913 18:48:28.546320    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 18:48:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:48:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:48:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:48:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:48:28 ha-617764 kubelet[1315]: E0913 18:48:28.649696    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253308649378157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:28 ha-617764 kubelet[1315]: E0913 18:48:28.649725    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253308649378157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:38 ha-617764 kubelet[1315]: E0913 18:48:38.651439    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253318650959548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:38 ha-617764 kubelet[1315]: E0913 18:48:38.651760    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253318650959548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:48 ha-617764 kubelet[1315]: E0913 18:48:48.653615    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253328652971679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:48 ha-617764 kubelet[1315]: E0913 18:48:48.653963    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253328652971679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.85s)
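Note on the kube-controller-manager errors captured above: the repeated "Operation cannot be fulfilled on daemonsets.apps ... the object has been modified; please apply your changes to the latest version and try again" messages for kindnet and kube-proxy are ordinary optimistic-concurrency (409 Conflict) failures on status updates while the secondary node was stopping, not data corruption. A minimal client-go sketch of the conventional retry pattern is shown below; the kubeconfig path, the annotation key, and the choice of mutation are illustrative assumptions, not taken from this report.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumption: a reachable kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// RetryOnConflict re-runs the closure whenever the apiserver answers with a
	// 409 Conflict ("the object has been modified"), the same error the
	// daemon_controller logged in the dump above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest object so the update carries the current resourceVersion.
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.test/touched"] = "true" // hypothetical mutation for the sketch
		_, err = client.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update finished, err:", err)
}

The controller resolves the same conflict on its next sync, which is why both DaemonSet statuses in the dump above still converge to DesiredNumberScheduled:3 / NumberReady:3 despite the logged errors.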

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (3.195775611s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:48:58.904966   27605 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:48:58.905068   27605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:48:58.905076   27605 out.go:358] Setting ErrFile to fd 2...
	I0913 18:48:58.905080   27605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:48:58.905245   27605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:48:58.905398   27605 out.go:352] Setting JSON to false
	I0913 18:48:58.905425   27605 mustload.go:65] Loading cluster: ha-617764
	I0913 18:48:58.905483   27605 notify.go:220] Checking for updates...
	I0913 18:48:58.905833   27605 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:48:58.905848   27605 status.go:255] checking status of ha-617764 ...
	I0913 18:48:58.906330   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:58.906392   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:58.924686   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0913 18:48:58.925144   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:58.925769   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:58.925804   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:58.926162   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:58.926377   27605 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:48:58.928086   27605 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:48:58.928100   27605 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:48:58.928367   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:58.928398   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:58.942737   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0913 18:48:58.943175   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:58.943640   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:58.943661   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:58.943951   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:58.944122   27605 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:48:58.946666   27605 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:58.947064   27605 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:48:58.947107   27605 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:58.947266   27605 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:48:58.947541   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:58.947577   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:58.961652   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0913 18:48:58.961990   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:58.962510   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:58.962534   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:58.962833   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:58.963024   27605 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:48:58.963217   27605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:58.963242   27605 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:48:58.965808   27605 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:58.966245   27605 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:48:58.966275   27605 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:48:58.966408   27605 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:48:58.966560   27605 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:48:58.966697   27605 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:48:58.966805   27605 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:48:59.046594   27605 ssh_runner.go:195] Run: systemctl --version
	I0913 18:48:59.053066   27605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:48:59.069722   27605 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:48:59.069747   27605 api_server.go:166] Checking apiserver status ...
	I0913 18:48:59.069779   27605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:48:59.093659   27605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:48:59.103486   27605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:48:59.103553   27605 ssh_runner.go:195] Run: ls
	I0913 18:48:59.107921   27605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:48:59.113252   27605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:48:59.113276   27605 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:48:59.113287   27605 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:48:59.113315   27605 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:48:59.113621   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:59.113662   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:59.128721   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I0913 18:48:59.129250   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:59.129745   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:59.129763   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:59.130141   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:59.130344   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:48:59.132032   27605 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:48:59.132045   27605 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:48:59.132410   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:59.132453   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:59.147160   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0913 18:48:59.147528   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:59.147953   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:59.147973   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:59.148296   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:59.148485   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:48:59.151439   27605 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:59.151868   27605 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:48:59.151892   27605 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:59.152070   27605 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:48:59.152433   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:48:59.152481   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:48:59.167417   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0913 18:48:59.167777   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:48:59.168208   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:48:59.168229   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:48:59.168545   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:48:59.168752   27605 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:48:59.168949   27605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:48:59.168967   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:48:59.171906   27605 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:59.172323   27605 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:48:59.172351   27605 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:48:59.172465   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:48:59.172613   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:48:59.172732   27605 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:48:59.172861   27605 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:49:01.714474   27605 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:01.714569   27605 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:49:01.714595   27605 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:01.714603   27605 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:49:01.714623   27605 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:01.714632   27605 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:01.714933   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.714980   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.730332   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32903
	I0913 18:49:01.730769   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.731302   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.731324   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.731612   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.731827   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:01.733200   27605 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:01.733217   27605 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:01.733508   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.733546   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.749085   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37513
	I0913 18:49:01.749549   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.749989   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.750019   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.750335   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.750512   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:01.753308   27605 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:01.753646   27605 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:01.753673   27605 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:01.753783   27605 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:01.754196   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.754243   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.768891   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0913 18:49:01.769264   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.769736   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.769755   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.770047   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.770252   27605 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:01.770439   27605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:01.770458   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:01.773162   27605 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:01.773537   27605 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:01.773558   27605 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:01.773714   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:01.773869   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:01.773995   27605 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:01.774132   27605 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:01.857599   27605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:01.872134   27605 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:01.872166   27605 api_server.go:166] Checking apiserver status ...
	I0913 18:49:01.872197   27605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:01.887005   27605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:01.897078   27605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:01.897146   27605 ssh_runner.go:195] Run: ls
	I0913 18:49:01.901808   27605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:01.906545   27605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:01.906569   27605 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:01.906577   27605 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:01.906591   27605 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:01.906875   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.906906   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.922293   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0913 18:49:01.922736   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.923200   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.923219   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.923500   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.923684   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:01.925076   27605 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:01.925089   27605 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:01.925462   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.925505   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.940765   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0913 18:49:01.941272   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.941806   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.941820   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.942139   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.942321   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:01.945253   27605 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:01.945879   27605 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:01.945914   27605 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:01.946033   27605 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:01.946432   27605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:01.946468   27605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:01.961535   27605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0913 18:49:01.962011   27605 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:01.962516   27605 main.go:141] libmachine: Using API Version  1
	I0913 18:49:01.962535   27605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:01.962816   27605 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:01.962994   27605 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:01.963180   27605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:01.963197   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:01.965951   27605 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:01.966363   27605 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:01.966384   27605 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:01.966531   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:01.966695   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:01.966804   27605 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:01.966905   27605 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:02.045957   27605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:02.059508   27605 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
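Note: the status checks above follow the same pattern for every control-plane node: resolve the cluster endpoint from the kubeconfig (here https://192.168.39.254:8443), then probe its /healthz endpoint and treat an HTTP 200 with body "ok" as a running apiserver. A minimal sketch of that probe, assuming a hypothetical checkHealthz helper and skipping minikube's own TLS and retry plumbing:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz is a hypothetical helper mirroring the log's
	// "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..."
	// step: a 200 response with body "ok" is treated as a running apiserver.
	func checkHealthz(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The guest uses a self-signed cluster CA; skipping verification
			// only keeps this sketch short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := checkHealthz("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}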
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
E0913 18:49:06.601720   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (5.209561158s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:03.038087   27705 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:03.038407   27705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:03.038435   27705 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:03.038442   27705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:03.038819   27705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:03.039015   27705 out.go:352] Setting JSON to false
	I0913 18:49:03.039042   27705 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:03.039136   27705 notify.go:220] Checking for updates...
	I0913 18:49:03.039467   27705 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:03.039483   27705 status.go:255] checking status of ha-617764 ...
	I0913 18:49:03.039878   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.039931   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.059537   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0913 18:49:03.060028   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.060640   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.060663   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.061059   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.061280   27705 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:03.062906   27705 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:03.062921   27705 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:03.063181   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.063213   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.077503   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
	I0913 18:49:03.077951   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.078449   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.078469   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.078741   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.078895   27705 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:03.081954   27705 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:03.082455   27705 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:03.082477   27705 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:03.082647   27705 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:03.083037   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.083098   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.099043   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0913 18:49:03.099463   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.099881   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.099906   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.100204   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.100395   27705 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:03.100546   27705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:03.100568   27705 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:03.103193   27705 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:03.103636   27705 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:03.103668   27705 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:03.103793   27705 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:03.103953   27705 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:03.104093   27705 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:03.104191   27705 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:03.182672   27705 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:03.189392   27705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:03.207478   27705 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:03.207511   27705 api_server.go:166] Checking apiserver status ...
	I0913 18:49:03.207552   27705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:03.221204   27705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:03.231419   27705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:03.231467   27705 ssh_runner.go:195] Run: ls
	I0913 18:49:03.235873   27705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:03.242004   27705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:03.242031   27705 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:03.242043   27705 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:03.242063   27705 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:03.242450   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.242500   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.258013   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0913 18:49:03.258473   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.259051   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.259071   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.259385   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.259566   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:03.260876   27705 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:49:03.260893   27705 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:03.261199   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.261231   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.276085   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I0913 18:49:03.276533   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.276976   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.276995   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.277291   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.277456   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:49:03.279955   27705 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:03.280370   27705 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:03.280396   27705 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:03.280504   27705 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:03.280794   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:03.280839   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:03.295756   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I0913 18:49:03.296211   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:03.296658   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:03.296680   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:03.296957   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:03.297142   27705 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:49:03.297318   27705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:03.297340   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:49:03.300025   27705 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:03.300505   27705 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:03.300527   27705 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:03.300716   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:49:03.300876   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:49:03.301027   27705 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:49:03.301153   27705 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:49:04.786402   27705 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:04.786445   27705 retry.go:31] will retry after 312.916709ms: dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:07.858413   27705 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:07.858512   27705 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:49:07.858533   27705 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:07.858542   27705 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:49:07.858559   27705 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:07.858569   27705 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:07.858856   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:07.858900   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:07.874606   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0913 18:49:07.875090   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:07.875594   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:07.875610   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:07.875909   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:07.876086   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:07.877542   27705 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:07.877556   27705 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:07.877853   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:07.877892   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:07.892848   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0913 18:49:07.893289   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:07.893760   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:07.893782   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:07.894143   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:07.894311   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:07.896964   27705 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:07.897442   27705 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:07.897463   27705 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:07.897618   27705 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:07.897955   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:07.897993   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:07.912390   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0913 18:49:07.913068   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:07.913638   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:07.913661   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:07.914024   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:07.914240   27705 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:07.914427   27705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:07.914450   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:07.917220   27705 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:07.917655   27705 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:07.917678   27705 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:07.917838   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:07.917986   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:07.918141   27705 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:07.918265   27705 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:08.001622   27705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:08.016415   27705 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:08.016439   27705 api_server.go:166] Checking apiserver status ...
	I0913 18:49:08.016473   27705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:08.030987   27705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:08.043563   27705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:08.043618   27705 ssh_runner.go:195] Run: ls
	I0913 18:49:08.047912   27705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:08.054465   27705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:08.054491   27705 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:08.054502   27705 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:08.054534   27705 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:08.054829   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:08.054862   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:08.069833   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0913 18:49:08.070263   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:08.070733   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:08.070755   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:08.071057   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:08.071248   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:08.072810   27705 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:08.072825   27705 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:08.073114   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:08.073157   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:08.090399   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36971
	I0913 18:49:08.090947   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:08.091449   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:08.091468   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:08.091799   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:08.091982   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:08.094696   27705 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:08.095159   27705 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:08.095196   27705 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:08.095345   27705 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:08.095669   27705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:08.095707   27705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:08.110501   27705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0913 18:49:08.110929   27705 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:08.111388   27705 main.go:141] libmachine: Using API Version  1
	I0913 18:49:08.111406   27705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:08.111693   27705 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:08.111874   27705 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:08.112021   27705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:08.112037   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:08.114959   27705 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:08.115397   27705 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:08.115420   27705 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:08.115594   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:08.115750   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:08.115917   27705 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:08.116034   27705 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:08.193440   27705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:08.207266   27705 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
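Note: the m02 failure above is the expected signature of a stopped secondary node: the SSH dial to 192.168.39.203:22 returns "no route to host", is retried once after a short backoff, and the node is then reported as Host:Error with Kubelet and APIServer Nonexistent. A rough sketch of that dial-and-retry step, assuming a hypothetical dialWithRetry helper rather than minikube's own sshutil/retry packages:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry is a hypothetical stand-in for the sshutil dial seen in
	// the log: try the node's SSH port, back off briefly on failure, and give
	// up after a few attempts so status can report Host:Error instead of hanging.
	func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
			time.Sleep(backoff)
		}
		return err
	}

	func main() {
		if err := dialWithRetry("192.168.39.203:22", 2, 300*time.Millisecond); err != nil {
			fmt.Println("node unreachable, reporting Host:Error:", err)
		}
	}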
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (4.926023072s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:09.467322   27806 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:09.467554   27806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:09.467563   27806 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:09.467566   27806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:09.467736   27806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:09.467891   27806 out.go:352] Setting JSON to false
	I0913 18:49:09.467917   27806 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:09.468014   27806 notify.go:220] Checking for updates...
	I0913 18:49:09.468256   27806 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:09.468269   27806 status.go:255] checking status of ha-617764 ...
	I0913 18:49:09.468639   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.468689   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.487952   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0913 18:49:09.488478   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.489105   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.489129   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.489588   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.489797   27806 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:09.491772   27806 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:09.491787   27806 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:09.492059   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.492119   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.506578   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0913 18:49:09.507034   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.507532   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.507557   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.507858   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.508040   27806 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:09.510915   27806 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:09.511299   27806 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:09.511325   27806 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:09.511536   27806 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:09.511818   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.511850   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.526201   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0913 18:49:09.526685   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.527159   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.527181   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.527491   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.527653   27806 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:09.527829   27806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:09.527851   27806 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:09.530487   27806 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:09.530882   27806 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:09.530904   27806 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:09.531061   27806 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:09.531248   27806 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:09.531389   27806 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:09.531564   27806 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:09.614640   27806 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:09.622862   27806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:09.639171   27806 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:09.639199   27806 api_server.go:166] Checking apiserver status ...
	I0913 18:49:09.639242   27806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:09.654887   27806 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:09.664710   27806 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:09.664764   27806 ssh_runner.go:195] Run: ls
	I0913 18:49:09.670318   27806 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:09.676116   27806 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:09.676161   27806 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:09.676203   27806 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:09.676232   27806 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:09.676534   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.676574   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.691591   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0913 18:49:09.692026   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.692531   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.692552   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.692919   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.693161   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:09.694741   27806 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:49:09.694759   27806 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:09.695131   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.695178   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.709779   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0913 18:49:09.710305   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.710758   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.710778   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.711117   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.711292   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:49:09.714195   27806 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:09.714641   27806 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:09.714674   27806 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:09.714837   27806 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:09.715238   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:09.715280   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:09.731445   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0913 18:49:09.731812   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:09.732248   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:09.732267   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:09.732608   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:09.732759   27806 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:49:09.732919   27806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:09.732938   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:49:09.735527   27806 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:09.735905   27806 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:09.735928   27806 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:09.736055   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:49:09.736213   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:49:09.736356   27806 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:49:09.736487   27806 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:49:10.930427   27806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:10.930484   27806 retry.go:31] will retry after 189.316064ms: dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:14.002373   27806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:14.002449   27806 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:49:14.002463   27806 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:14.002472   27806 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:49:14.002491   27806 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:14.002497   27806 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:14.002818   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.002857   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.017465   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0913 18:49:14.017892   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.018360   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.018381   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.018682   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.018841   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:14.020185   27806 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:14.020200   27806 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:14.020529   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.020566   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.036241   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0913 18:49:14.036692   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.037182   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.037203   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.037498   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.037687   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:14.040619   27806 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:14.041086   27806 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:14.041115   27806 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:14.041255   27806 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:14.041564   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.041600   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.056333   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44543
	I0913 18:49:14.056659   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.057059   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.057077   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.057345   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.057538   27806 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:14.057682   27806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:14.057707   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:14.060136   27806 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:14.060613   27806 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:14.060638   27806 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:14.060811   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:14.060959   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:14.061123   27806 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:14.061258   27806 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:14.141865   27806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:14.158253   27806 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:14.158279   27806 api_server.go:166] Checking apiserver status ...
	I0913 18:49:14.158308   27806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:14.174437   27806 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:14.185697   27806 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:14.185750   27806 ssh_runner.go:195] Run: ls
	I0913 18:49:14.191185   27806 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:14.197378   27806 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:14.197402   27806 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:14.197410   27806 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:14.197424   27806 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:14.197700   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.197740   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.213114   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37365
	I0913 18:49:14.213614   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.214071   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.214090   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.214513   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.214734   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:14.216459   27806 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:14.216475   27806 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:14.216766   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.216808   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.232450   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0913 18:49:14.232867   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.233365   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.233387   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.233726   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.233928   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:14.236868   27806 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:14.237296   27806 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:14.237323   27806 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:14.237488   27806 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:14.237840   27806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:14.237877   27806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:14.252605   27806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0913 18:49:14.253045   27806 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:14.253534   27806 main.go:141] libmachine: Using API Version  1
	I0913 18:49:14.253556   27806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:14.253918   27806 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:14.254128   27806 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:14.254300   27806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:14.254319   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:14.257025   27806 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:14.257433   27806 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:14.257467   27806 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:14.257568   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:14.257726   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:14.257864   27806 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:14.257996   27806 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:14.337215   27806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:14.351549   27806 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
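Note: the repeated "unable to find freezer cgroup" warning is benign. The status probe pgreps the kube-apiserver process and then looks for a freezer controller line in /proc/<pid>/cgroup; on a cgroup v2 guest no such line exists, so the check falls back to the healthz probe shown immediately afterwards. A simplified local sketch of that lookup (the real check runs the equivalent egrep over SSH; hasFreezerCgroup is a hypothetical helper):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasFreezerCgroup reports whether /proc/<pid>/cgroup lists a freezer
	// controller. On cgroup v2 hosts the file holds a single "0::/..." line,
	// so this returns false and the caller falls back to the healthz probe.
	func hasFreezerCgroup(pid int) (bool, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return false, err
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// cgroup v1 entries look like "7:freezer:/kubepods/...".
			parts := strings.SplitN(scanner.Text(), ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				return true, nil
			}
		}
		return false, scanner.Err()
	}

	func main() {
		ok, err := hasFreezerCgroup(os.Getpid())
		fmt.Println("freezer cgroup present:", ok, err)
	}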
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (3.691929529s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:17.571690   27922 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:17.571899   27922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:17.571907   27922 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:17.571911   27922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:17.572062   27922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:17.572203   27922 out.go:352] Setting JSON to false
	I0913 18:49:17.572231   27922 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:17.572335   27922 notify.go:220] Checking for updates...
	I0913 18:49:17.572622   27922 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:17.572637   27922 status.go:255] checking status of ha-617764 ...
	I0913 18:49:17.573109   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.573171   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.593309   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40617
	I0913 18:49:17.593818   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.594377   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.594399   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.594814   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.595013   27922 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:17.596711   27922 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:17.596727   27922 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:17.597106   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.597148   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.611889   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0913 18:49:17.612298   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.612747   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.612767   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.613039   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.613202   27922 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:17.616054   27922 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:17.616543   27922 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:17.616566   27922 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:17.616679   27922 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:17.616962   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.616995   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.631469   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0913 18:49:17.631888   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.632306   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.632326   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.632658   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.632795   27922 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:17.632968   27922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:17.632994   27922 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:17.635521   27922 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:17.635926   27922 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:17.635956   27922 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:17.636112   27922 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:17.636333   27922 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:17.636525   27922 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:17.636761   27922 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:17.717739   27922 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:17.724363   27922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:17.738818   27922 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:17.738851   27922 api_server.go:166] Checking apiserver status ...
	I0913 18:49:17.738891   27922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:17.752603   27922 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:17.762488   27922 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:17.762535   27922 ssh_runner.go:195] Run: ls
	I0913 18:49:17.766902   27922 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:17.771239   27922 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:17.771258   27922 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:17.771267   27922 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:17.771281   27922 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:17.771554   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.771584   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.787026   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0913 18:49:17.787405   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.787799   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.787821   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.788130   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.788292   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:17.789649   27922 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:49:17.789666   27922 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:17.789950   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.789995   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.805295   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0913 18:49:17.805754   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.806292   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.806311   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.806630   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.806796   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:49:17.809246   27922 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:17.809624   27922 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:17.809664   27922 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:17.809763   27922 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:17.810110   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:17.810157   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:17.826424   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0913 18:49:17.826816   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:17.827238   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:17.827257   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:17.827582   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:17.827741   27922 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:49:17.827916   27922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:17.827932   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:49:17.830734   27922 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:17.831185   27922 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:17.831203   27922 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:17.831383   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:49:17.831602   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:49:17.831741   27922 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:49:17.831936   27922 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:49:20.882321   27922 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:20.882426   27922 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:49:20.882443   27922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:20.882452   27922 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:49:20.882473   27922 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:20.882486   27922 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:20.882945   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:20.882998   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:20.898250   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0913 18:49:20.898713   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:20.899181   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:20.899202   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:20.899528   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:20.899709   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:20.901274   27922 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:20.901286   27922 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:20.901561   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:20.901612   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:20.916120   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0913 18:49:20.916554   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:20.916994   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:20.917010   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:20.917350   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:20.917511   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:20.920566   27922 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:20.921005   27922 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:20.921021   27922 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:20.921124   27922 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:20.921410   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:20.921441   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:20.937272   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0913 18:49:20.937614   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:20.938065   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:20.938080   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:20.938346   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:20.938518   27922 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:20.938694   27922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:20.938713   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:20.941578   27922 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:20.942082   27922 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:20.942136   27922 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:20.942291   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:20.942449   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:20.942601   27922 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:20.942762   27922 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:21.026498   27922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:21.042249   27922 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:21.042275   27922 api_server.go:166] Checking apiserver status ...
	I0913 18:49:21.042316   27922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:21.055041   27922 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:21.064692   27922 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:21.064746   27922 ssh_runner.go:195] Run: ls
	I0913 18:49:21.069121   27922 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:21.073425   27922 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:21.073445   27922 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:21.073456   27922 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:21.073475   27922 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:21.073740   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:21.073780   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:21.088891   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0913 18:49:21.089310   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:21.089775   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:21.089797   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:21.090157   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:21.090427   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:21.092044   27922 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:21.092076   27922 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:21.092400   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:21.092433   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:21.106765   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0913 18:49:21.107175   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:21.107651   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:21.107666   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:21.108056   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:21.108286   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:21.110884   27922 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:21.111271   27922 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:21.111292   27922 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:21.111428   27922 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:21.111715   27922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:21.111749   27922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:21.126530   27922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0913 18:49:21.126999   27922 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:21.127496   27922 main.go:141] libmachine: Using API Version  1
	I0913 18:49:21.127515   27922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:21.127794   27922 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:21.127978   27922 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:21.128127   27922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:21.128147   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:21.130967   27922 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:21.131400   27922 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:21.131433   27922 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:21.131544   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:21.131682   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:21.131825   27922 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:21.131924   27922 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:21.209294   27922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:21.223100   27922 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
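The ha-617764-m02 entry above comes back as host: Error because the SSH dial to 192.168.39.203:22 fails with "no route to host" while that VM is going down, so its kubelet and apiserver are marked Nonexistent. A minimal reachability probe showing the same failure mode (an illustration only, not the sshutil code; the address is copied from the log):

// Hypothetical TCP probe of the node's SSH port; the address comes from the log above.
// A "connect: no route to host" error here corresponds to the Host:Error status.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.203:22", 5*time.Second)
	if err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable")
}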
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (3.721898802s)

-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0913 18:49:24.534116   28022 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:24.534217   28022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:24.534225   28022 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:24.534229   28022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:24.534379   28022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:24.534516   28022 out.go:352] Setting JSON to false
	I0913 18:49:24.534543   28022 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:24.534633   28022 notify.go:220] Checking for updates...
	I0913 18:49:24.534957   28022 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:24.534977   28022 status.go:255] checking status of ha-617764 ...
	I0913 18:49:24.535472   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.535502   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.555190   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0913 18:49:24.555569   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.556189   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.556219   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.556517   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.556674   28022 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:24.558244   28022 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:24.558260   28022 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:24.558547   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.558585   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.573201   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0913 18:49:24.573638   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.574054   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.574077   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.574381   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.574541   28022 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:24.576997   28022 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:24.577393   28022 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:24.577419   28022 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:24.577557   28022 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:24.577947   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.577988   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.592321   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0913 18:49:24.592816   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.593254   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.593274   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.593595   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.593812   28022 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:24.594042   28022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:24.594085   28022 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:24.597025   28022 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:24.597447   28022 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:24.597474   28022 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:24.597606   28022 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:24.597779   28022 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:24.597915   28022 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:24.598046   28022 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:24.681806   28022 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:24.688354   28022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:24.703240   28022 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:24.703278   28022 api_server.go:166] Checking apiserver status ...
	I0913 18:49:24.703326   28022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:24.717734   28022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:24.727083   28022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:24.727139   28022 ssh_runner.go:195] Run: ls
	I0913 18:49:24.731330   28022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:24.737598   28022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:24.737619   28022 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:24.737631   28022 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:24.737652   28022 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:24.737931   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.737985   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.753032   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0913 18:49:24.753504   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.754054   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.754069   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.754429   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.754572   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:24.756228   28022 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:49:24.756243   28022 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:24.756558   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.756596   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.771203   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0913 18:49:24.771668   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.772173   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.772190   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.772491   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.772657   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:49:24.775106   28022 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:24.775482   28022 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:24.775503   28022 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:24.775634   28022 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:49:24.775933   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:24.775973   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:24.790656   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42735
	I0913 18:49:24.791105   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:24.791570   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:24.791595   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:24.791884   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:24.792080   28022 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:49:24.792233   28022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:24.792248   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:49:24.795161   28022 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:24.795559   28022 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:49:24.795582   28022 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:49:24.795705   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:49:24.795861   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:49:24.795997   28022 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:49:24.796121   28022 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	W0913 18:49:27.862411   28022 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0913 18:49:27.862526   28022 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0913 18:49:27.862548   28022 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:27.862560   28022 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:49:27.862592   28022 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0913 18:49:27.862609   28022 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:27.862907   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:27.862951   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:27.878036   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0913 18:49:27.878510   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:27.879016   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:27.879043   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:27.879342   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:27.879492   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:27.881055   28022 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:27.881078   28022 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:27.881392   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:27.881424   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:27.896706   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33663
	I0913 18:49:27.897144   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:27.897588   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:27.897609   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:27.897893   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:27.898069   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:27.900378   28022 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:27.900687   28022 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:27.900713   28022 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:27.900897   28022 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:27.901337   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:27.901379   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:27.916430   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0913 18:49:27.916925   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:27.917419   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:27.917439   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:27.917807   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:27.918045   28022 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:27.918281   28022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:27.918301   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:27.921094   28022 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:27.921510   28022 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:27.921543   28022 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:27.921707   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:27.921856   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:27.922000   28022 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:27.922137   28022 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:28.006118   28022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:28.022266   28022 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:28.022293   28022 api_server.go:166] Checking apiserver status ...
	I0913 18:49:28.022327   28022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:28.036950   28022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:28.048230   28022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:28.048275   28022 ssh_runner.go:195] Run: ls
	I0913 18:49:28.052751   28022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:28.058470   28022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:28.058497   28022 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:28.058508   28022 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:28.058526   28022 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:28.058910   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:28.058951   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:28.074657   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0913 18:49:28.075074   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:28.075496   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:28.075516   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:28.075870   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:28.076051   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:28.077608   28022 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:28.077623   28022 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:28.077991   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:28.078037   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:28.094824   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I0913 18:49:28.095193   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:28.095727   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:28.095748   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:28.096097   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:28.096283   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:28.098962   28022 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:28.099325   28022 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:28.099348   28022 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:28.099511   28022 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:28.099792   28022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:28.099833   28022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:28.114571   28022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0913 18:49:28.114996   28022 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:28.115471   28022 main.go:141] libmachine: Using API Version  1
	I0913 18:49:28.115494   28022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:28.115804   28022 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:28.115959   28022 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:28.116154   28022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:28.116178   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:28.119126   28022 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:28.119539   28022 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:28.119556   28022 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:28.119719   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:28.119874   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:28.120028   28022 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:28.120180   28022 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:28.201347   28022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:28.215584   28022 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
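The test keeps re-running the status command until the stopped secondary node settles from host: Error (SSH unreachable) to host: Stopped, which finally happens in the run below. A rough reproduction sketch that polls the same way (the profile name and binary path are taken from the log and are assumptions outside this CI workspace):

// Hypothetical polling loop; profile name and binary path are copied from the log above
// and will differ elsewhere. It stops once any node prints "host: Stopped".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "ha-617764"
	for i := 0; i < 20; i++ {
		// The command exits non-zero while a node is in Error state, so the exit error
		// is ignored and only the captured stdout is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"status", "-v=7", "--alsologtostderr").Output()
		if strings.Contains(string(out), "host: Stopped") {
			fmt.Println("a node now reports host: Stopped")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for a Stopped node")
}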
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 7 (622.878878ms)

-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0913 18:49:35.402081   28155 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:35.402400   28155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:35.402420   28155 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:35.402427   28155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:35.402717   28155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:35.402935   28155 out.go:352] Setting JSON to false
	I0913 18:49:35.402969   28155 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:35.403023   28155 notify.go:220] Checking for updates...
	I0913 18:49:35.403376   28155 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:35.403393   28155 status.go:255] checking status of ha-617764 ...
	I0913 18:49:35.403919   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.403990   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.419415   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0913 18:49:35.419970   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.420568   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.420594   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.421083   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.421286   28155 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:35.423246   28155 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:35.423262   28155 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:35.423679   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.423723   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.438621   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
	I0913 18:49:35.439116   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.439689   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.439725   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.440094   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.440235   28155 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:35.443138   28155 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:35.443505   28155 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:35.443535   28155 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:35.443746   28155 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:35.444030   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.444064   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.458846   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0913 18:49:35.459338   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.459807   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.459827   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.460134   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.460324   28155 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:35.460520   28155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:35.460546   28155 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:35.464502   28155 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:35.464829   28155 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:35.464864   28155 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:35.464983   28155 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:35.465148   28155 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:35.465303   28155 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:35.465493   28155 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:35.546375   28155 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:35.552415   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:35.567300   28155 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:35.567328   28155 api_server.go:166] Checking apiserver status ...
	I0913 18:49:35.567358   28155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:35.584214   28155 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:35.603651   28155 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:35.603722   28155 ssh_runner.go:195] Run: ls
	I0913 18:49:35.612215   28155 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:35.617840   28155 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:35.617865   28155 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:35.617878   28155 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:35.617898   28155 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:35.618255   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.618290   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.632895   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0913 18:49:35.633342   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.633800   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.633821   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.634117   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.634278   28155 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:35.635758   28155 status.go:330] ha-617764-m02 host status = "Stopped" (err=<nil>)
	I0913 18:49:35.635769   28155 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:35.635774   28155 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:35.635789   28155 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:35.636087   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.636121   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.650779   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0913 18:49:35.651223   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.651688   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.651706   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.651986   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.652175   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:35.653768   28155 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:35.653785   28155 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:35.654216   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.654261   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.669047   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0913 18:49:35.669473   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.669905   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.669925   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.670239   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.670395   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:35.673208   28155 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:35.673643   28155 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:35.673674   28155 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:35.673782   28155 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:35.674181   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.674218   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.688501   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0913 18:49:35.688903   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.689388   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.689423   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.689699   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.689873   28155 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:35.690146   28155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:35.690174   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:35.693177   28155 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:35.693567   28155 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:35.693592   28155 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:35.693710   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:35.693840   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:35.693974   28155 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:35.694142   28155 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:35.777934   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:35.792743   28155 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:35.792772   28155 api_server.go:166] Checking apiserver status ...
	I0913 18:49:35.792810   28155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:35.806550   28155 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:35.818425   28155 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:35.818477   28155 ssh_runner.go:195] Run: ls
	I0913 18:49:35.823484   28155 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:35.827737   28155 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:35.827756   28155 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:35.827764   28155 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:35.827778   28155 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:35.828064   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.828095   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.842713   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0913 18:49:35.843193   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.843747   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.843772   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.844089   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.844279   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:35.845670   28155 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:35.845686   28155 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:35.845954   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.845983   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.860984   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0913 18:49:35.861373   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.861799   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.861818   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.862139   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.862314   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:35.865073   28155 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:35.865495   28155 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:35.865520   28155 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:35.865661   28155 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:35.866062   28155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:35.866144   28155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:35.881571   28155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45341
	I0913 18:49:35.882045   28155 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:35.882609   28155 main.go:141] libmachine: Using API Version  1
	I0913 18:49:35.882632   28155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:35.882970   28155 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:35.883193   28155 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:35.883349   28155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:35.883368   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:35.885960   28155 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:35.886420   28155 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:35.886443   28155 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:35.886568   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:35.886728   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:35.886855   28155 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:35.886968   28155 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:35.965395   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:35.979540   28155 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
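The stderr trace above shows the per-node probe sequence behind each status check: minikube reads the shared server address from the kubeconfig, locates the kube-apiserver process on the node over SSH with pgrep, tries to read that process's freezer cgroup (the warning is non-fatal, most likely because the guest uses cgroup v2, which has no separate freezer hierarchy), and finally queries the load-balanced /healthz endpoint. The Go snippet below is a minimal sketch of that final probe only, assuming the endpoint seen in the log (https://192.168.39.254:8443) is reachable and that anonymous access to /healthz is permitted; it is an illustration, not minikube's own implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// TLS verification is skipped purely for illustration; the real check
	// authenticates with the client certificates from the kubeconfig.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}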
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 7 (618.472569ms)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:45.169608   28262 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:45.169878   28262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:45.169888   28262 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:45.169893   28262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:45.170115   28262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:45.170318   28262 out.go:352] Setting JSON to false
	I0913 18:49:45.170348   28262 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:45.170381   28262 notify.go:220] Checking for updates...
	I0913 18:49:45.170783   28262 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:45.170798   28262 status.go:255] checking status of ha-617764 ...
	I0913 18:49:45.171228   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.171285   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.190904   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0913 18:49:45.191436   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.192060   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.192088   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.192413   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.192568   28262 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:45.194201   28262 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:45.194216   28262 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:45.194497   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.194527   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.209413   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0913 18:49:45.209741   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.210276   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.210298   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.210618   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.210847   28262 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:45.213518   28262 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:45.213954   28262 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:45.213977   28262 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:45.214114   28262 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:45.214405   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.214457   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.228763   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0913 18:49:45.229187   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.229629   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.229650   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.229960   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.230134   28262 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:45.230281   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:45.230308   28262 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:45.232921   28262 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:45.233329   28262 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:45.233355   28262 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:45.233510   28262 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:45.233636   28262 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:45.233756   28262 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:45.233854   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:45.322170   28262 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:45.329318   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:45.347695   28262 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:45.347724   28262 api_server.go:166] Checking apiserver status ...
	I0913 18:49:45.347754   28262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:45.363487   28262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:45.373618   28262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:45.373667   28262 ssh_runner.go:195] Run: ls
	I0913 18:49:45.378491   28262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:45.382749   28262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:45.382771   28262 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:45.382783   28262 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:45.382803   28262 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:45.383112   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.383153   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.398160   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0913 18:49:45.398622   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.399065   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.399087   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.399436   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.399623   28262 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:45.401036   28262 status.go:330] ha-617764-m02 host status = "Stopped" (err=<nil>)
	I0913 18:49:45.401053   28262 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:45.401061   28262 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:45.401093   28262 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:45.401483   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.401526   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.415937   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37005
	I0913 18:49:45.416453   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.416913   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.416928   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.417248   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.417416   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:45.418756   28262 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:45.418770   28262 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:45.419101   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.419146   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.433789   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0913 18:49:45.434257   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.434770   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.434784   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.435063   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.435183   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:45.437748   28262 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:45.438232   28262 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:45.438257   28262 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:45.438403   28262 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:45.438774   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.438813   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.453400   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0913 18:49:45.453743   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.454194   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.454217   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.454528   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.454699   28262 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:45.454866   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:45.454889   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:45.457307   28262 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:45.457738   28262 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:45.457764   28262 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:45.457908   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:45.458061   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:45.458221   28262 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:45.458339   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:45.541363   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:45.556291   28262 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:45.556320   28262 api_server.go:166] Checking apiserver status ...
	I0913 18:49:45.556360   28262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:45.572999   28262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:45.582892   28262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:45.582956   28262 ssh_runner.go:195] Run: ls
	I0913 18:49:45.587503   28262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:45.591961   28262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:45.591982   28262 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:45.591990   28262 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:45.592013   28262 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:45.592397   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.592435   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.607707   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42973
	I0913 18:49:45.608119   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.608716   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.608740   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.609019   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.609180   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:45.610587   28262 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:45.610604   28262 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:45.610879   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.610914   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.625504   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0913 18:49:45.625880   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.626386   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.626412   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.626734   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.626924   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:45.629841   28262 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:45.630334   28262 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:45.630364   28262 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:45.630562   28262 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:45.630917   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:45.630953   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:45.645622   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0913 18:49:45.645972   28262 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:45.646462   28262 main.go:141] libmachine: Using API Version  1
	I0913 18:49:45.646486   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:45.646781   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:45.646983   28262 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:45.647164   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:45.647183   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:45.649811   28262 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:45.650330   28262 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:45.650366   28262 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:45.650543   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:45.650736   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:45.650865   28262 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:45.650989   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:45.733154   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:45.747070   28262 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 7 (589.810157ms)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-617764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:54.424944   28366 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:54.425158   28366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:54.425166   28366 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:54.425176   28366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:54.425362   28366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:54.425506   28366 out.go:352] Setting JSON to false
	I0913 18:49:54.425532   28366 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:54.425636   28366 notify.go:220] Checking for updates...
	I0913 18:49:54.425905   28366 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:54.425920   28366 status.go:255] checking status of ha-617764 ...
	I0913 18:49:54.426405   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.426465   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.446531   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0913 18:49:54.447006   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.447672   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.447709   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.448015   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.448180   28366 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:49:54.449772   28366 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:49:54.449787   28366 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:54.450073   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.450137   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.464283   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
	I0913 18:49:54.464595   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.464974   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.464991   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.465276   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.465411   28366 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:49:54.467768   28366 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:54.468173   28366 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:54.468203   28366 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:54.468317   28366 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:49:54.468622   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.468657   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.482564   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0913 18:49:54.482925   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.483348   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.483367   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.483647   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.483812   28366 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:49:54.483965   28366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:54.483989   28366 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:49:54.486708   28366 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:54.487122   28366 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:49:54.487143   28366 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:49:54.487305   28366 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:49:54.487464   28366 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:49:54.487603   28366 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:49:54.487712   28366 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:49:54.565348   28366 ssh_runner.go:195] Run: systemctl --version
	I0913 18:49:54.570907   28366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:54.585771   28366 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:54.585804   28366 api_server.go:166] Checking apiserver status ...
	I0913 18:49:54.585850   28366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:54.599742   28366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0913 18:49:54.610037   28366 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:54.610085   28366 ssh_runner.go:195] Run: ls
	I0913 18:49:54.614506   28366 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:54.618757   28366 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:54.618775   28366 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:49:54.618784   28366 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:54.618798   28366 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:49:54.619086   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.619119   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.633556   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0913 18:49:54.633885   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.634450   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.634478   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.634794   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.634977   28366 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:49:54.636621   28366 status.go:330] ha-617764-m02 host status = "Stopped" (err=<nil>)
	I0913 18:49:54.636638   28366 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:54.636646   28366 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:54.636664   28366 status.go:255] checking status of ha-617764-m03 ...
	I0913 18:49:54.637069   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.637111   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.651611   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I0913 18:49:54.652063   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.652494   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.652512   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.652829   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.652993   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:54.654592   28366 status.go:330] ha-617764-m03 host status = "Running" (err=<nil>)
	I0913 18:49:54.654606   28366 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:54.654922   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.654964   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.670667   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0913 18:49:54.671183   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.671695   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.671720   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.672076   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.672279   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:49:54.674927   28366 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:54.675326   28366 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:54.675353   28366 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:54.675531   28366 host.go:66] Checking if "ha-617764-m03" exists ...
	I0913 18:49:54.675826   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.675859   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.690813   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0913 18:49:54.691140   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.691607   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.691625   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.691995   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.692172   28366 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:54.692344   28366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:54.692382   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:54.695134   28366 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:54.695502   28366 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:54.695536   28366 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:54.695644   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:54.695776   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:54.695877   28366 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:54.695997   28366 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:54.777726   28366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:54.794248   28366 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:49:54.794274   28366 api_server.go:166] Checking apiserver status ...
	I0913 18:49:54.794306   28366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:49:54.808058   28366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0913 18:49:54.817516   28366 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:49:54.817561   28366 ssh_runner.go:195] Run: ls
	I0913 18:49:54.822551   28366 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:49:54.826925   28366 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:49:54.826947   28366 status.go:422] ha-617764-m03 apiserver status = Running (err=<nil>)
	I0913 18:49:54.826956   28366 status.go:257] ha-617764-m03 status: &{Name:ha-617764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:54.826971   28366 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:49:54.827266   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.827301   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.841938   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0913 18:49:54.842374   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.842832   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.842852   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.843157   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.843353   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:54.844746   28366 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:49:54.844763   28366 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:54.845050   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.845098   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.859639   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
	I0913 18:49:54.860051   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.860509   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.860529   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.860796   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.860975   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:49:54.863875   28366 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:54.864321   28366 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:54.864357   28366 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:54.864475   28366 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:49:54.864758   28366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:54.864807   28366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:54.879741   28366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0913 18:49:54.880149   28366 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:54.880602   28366 main.go:141] libmachine: Using API Version  1
	I0913 18:49:54.880618   28366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:54.880888   28366 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:54.881085   28366 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:54.881250   28366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:49:54.881271   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:54.883787   28366 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:54.884330   28366 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:54.884354   28366 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:54.884489   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:54.884623   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:54.884754   28366 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:54.884887   28366 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:54.961031   28366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:49:54.974612   28366 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr" : exit status 7
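Each of the status runs above exits with status 7 while ha-617764-m02 is still reported as Stopped after the restart. One plausible reading (an assumption made for illustration, not something this report states) is that the exit code is a bitwise OR of per-component failure flags, so a single fully stopped node sets all three low bits. A minimal sketch of that idea, with hypothetical flag names and values:

package main

import "fmt"

const (
	hostNotRunning    = 1 << 0 // 1, hypothetical flag
	clusterNotRunning = 1 << 1 // 2, hypothetical flag
	k8sNotRunning     = 1 << 2 // 4, hypothetical flag
)

func main() {
	// ha-617764-m02 reports Host, Kubelet, and APIServer all Stopped above.
	hostStopped, kubeletStopped, apiserverStopped := true, true, true

	exitCode := 0
	if hostStopped {
		exitCode |= hostNotRunning
	}
	if kubeletStopped {
		exitCode |= clusterNotRunning
	}
	if apiserverStopped {
		exitCode |= k8sNotRunning
	}
	fmt.Println("exit status", exitCode) // prints: exit status 7
}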
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.384019639s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m03_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
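The audit table above records the exact `minikube cp` and `minikube ssh -n` invocations the copy-file test issued against the ha-617764 profile. A minimal sketch, assuming the minikube binary is on PATH and passing the profile via -p, of reproducing one copy-and-verify pair from the table:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runMinikube shells out to the minikube binary, which is assumed to be on PATH.
func runMinikube(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy a local test file onto node m04 of the ha-617764 profile ...
	if out, err := runMinikube("-p", "ha-617764", "cp",
		"testdata/cp-test.txt", "ha-617764-m04:/home/docker/cp-test.txt"); err != nil {
		fmt.Println(out, err)
		return
	}
	// ... then read it back over SSH on that node to verify the copy.
	out, err := runMinikube("-p", "ha-617764", "ssh", "-n", "ha-617764-m04",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Println(out, err)
}
```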
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:41:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:41:46.342076   22792 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:41:46.342355   22792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:46.342364   22792 out.go:358] Setting ErrFile to fd 2...
	I0913 18:41:46.342369   22792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:46.342538   22792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:41:46.343063   22792 out.go:352] Setting JSON to false
	I0913 18:41:46.343967   22792 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1449,"bootTime":1726251457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:41:46.344058   22792 start.go:139] virtualization: kvm guest
	I0913 18:41:46.346218   22792 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:41:46.347591   22792 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:41:46.347592   22792 notify.go:220] Checking for updates...
	I0913 18:41:46.349905   22792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:41:46.351182   22792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:41:46.352355   22792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.353531   22792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:41:46.354851   22792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:41:46.356378   22792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:41:46.390751   22792 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:41:46.392075   22792 start.go:297] selected driver: kvm2
	I0913 18:41:46.392084   22792 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:41:46.392094   22792 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:41:46.392812   22792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:41:46.392896   22792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:41:46.407318   22792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:41:46.407361   22792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:41:46.407592   22792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:41:46.407622   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:41:46.407659   22792 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 18:41:46.407666   22792 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 18:41:46.407735   22792 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:41:46.407833   22792 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:41:46.409833   22792 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:41:46.411217   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:41:46.411244   22792 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:41:46.411250   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:41:46.411328   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:41:46.411342   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:41:46.411638   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:41:46.411660   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json: {Name:mk4f12574a12f474df5f3b929e48935a5774feaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:41:46.411795   22792 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:41:46.411830   22792 start.go:364] duration metric: took 18.873µs to acquireMachinesLock for "ha-617764"
	I0913 18:41:46.411852   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:41:46.411920   22792 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:41:46.413820   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:41:46.413936   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:41:46.413977   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:41:46.428170   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0913 18:41:46.428606   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:41:46.429169   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:41:46.429192   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:41:46.429573   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:41:46.429755   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:41:46.429898   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:41:46.430037   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:41:46.430070   22792 client.go:168] LocalClient.Create starting
	I0913 18:41:46.430113   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:41:46.430174   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:41:46.430193   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:41:46.430263   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:41:46.430287   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:41:46.430308   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:41:46.430331   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:41:46.430350   22792 main.go:141] libmachine: (ha-617764) Calling .PreCreateCheck
	I0913 18:41:46.430738   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:41:46.431083   22792 main.go:141] libmachine: Creating machine...
	I0913 18:41:46.431095   22792 main.go:141] libmachine: (ha-617764) Calling .Create
	I0913 18:41:46.431240   22792 main.go:141] libmachine: (ha-617764) Creating KVM machine...
	I0913 18:41:46.432342   22792 main.go:141] libmachine: (ha-617764) DBG | found existing default KVM network
	I0913 18:41:46.432950   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.432804   22815 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0913 18:41:46.432972   22792 main.go:141] libmachine: (ha-617764) DBG | created network xml: 
	I0913 18:41:46.432985   22792 main.go:141] libmachine: (ha-617764) DBG | <network>
	I0913 18:41:46.432992   22792 main.go:141] libmachine: (ha-617764) DBG |   <name>mk-ha-617764</name>
	I0913 18:41:46.433004   22792 main.go:141] libmachine: (ha-617764) DBG |   <dns enable='no'/>
	I0913 18:41:46.433009   22792 main.go:141] libmachine: (ha-617764) DBG |   
	I0913 18:41:46.433017   22792 main.go:141] libmachine: (ha-617764) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:41:46.433026   22792 main.go:141] libmachine: (ha-617764) DBG |     <dhcp>
	I0913 18:41:46.433036   22792 main.go:141] libmachine: (ha-617764) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:41:46.433054   22792 main.go:141] libmachine: (ha-617764) DBG |     </dhcp>
	I0913 18:41:46.433063   22792 main.go:141] libmachine: (ha-617764) DBG |   </ip>
	I0913 18:41:46.433068   22792 main.go:141] libmachine: (ha-617764) DBG |   
	I0913 18:41:46.433076   22792 main.go:141] libmachine: (ha-617764) DBG | </network>
	I0913 18:41:46.433082   22792 main.go:141] libmachine: (ha-617764) DBG | 
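The XML above is the private network definition the driver hands to libvirt before creating the domain. A minimal sketch, assuming the libvirt.org/go/libvirt Go bindings (not the driver's own code), of defining and starting such a network:

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed binding; requires cgo and the libvirt headers
)

// networkXML mirrors the definition printed in the log above.
const networkXML = `<network>
  <name>mk-ha-617764</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Connect to the system libvirt daemon, matching KVMQemuURI in the config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent network object from the XML, then start it.
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer nw.Free()

	if err := nw.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private network mk-ha-617764 is active")
}
```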
	I0913 18:41:46.438128   22792 main.go:141] libmachine: (ha-617764) DBG | trying to create private KVM network mk-ha-617764 192.168.39.0/24...
	I0913 18:41:46.501990   22792 main.go:141] libmachine: (ha-617764) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 ...
	I0913 18:41:46.502020   22792 main.go:141] libmachine: (ha-617764) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:41:46.502029   22792 main.go:141] libmachine: (ha-617764) DBG | private KVM network mk-ha-617764 192.168.39.0/24 created
	I0913 18:41:46.502049   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.501959   22815 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.502172   22792 main.go:141] libmachine: (ha-617764) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:41:46.746853   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.746736   22815 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa...
	I0913 18:41:46.901725   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.901613   22815 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/ha-617764.rawdisk...
	I0913 18:41:46.901768   22792 main.go:141] libmachine: (ha-617764) DBG | Writing magic tar header
	I0913 18:41:46.901781   22792 main.go:141] libmachine: (ha-617764) DBG | Writing SSH key tar header
	I0913 18:41:46.901791   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:46.901725   22815 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 ...
	I0913 18:41:46.901917   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764
	I0913 18:41:46.901965   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764 (perms=drwx------)
	I0913 18:41:46.901980   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:41:46.901994   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:41:46.902001   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:41:46.902008   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:41:46.902014   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:41:46.902024   22792 main.go:141] libmachine: (ha-617764) DBG | Checking permissions on dir: /home
	I0913 18:41:46.902029   22792 main.go:141] libmachine: (ha-617764) DBG | Skipping /home - not owner
	I0913 18:41:46.902039   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:41:46.902056   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:41:46.902071   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:41:46.902082   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:41:46.902113   22792 main.go:141] libmachine: (ha-617764) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:41:46.902131   22792 main.go:141] libmachine: (ha-617764) Creating domain...
	I0913 18:41:46.903143   22792 main.go:141] libmachine: (ha-617764) define libvirt domain using xml: 
	I0913 18:41:46.903185   22792 main.go:141] libmachine: (ha-617764) <domain type='kvm'>
	I0913 18:41:46.903198   22792 main.go:141] libmachine: (ha-617764)   <name>ha-617764</name>
	I0913 18:41:46.903209   22792 main.go:141] libmachine: (ha-617764)   <memory unit='MiB'>2200</memory>
	I0913 18:41:46.903220   22792 main.go:141] libmachine: (ha-617764)   <vcpu>2</vcpu>
	I0913 18:41:46.903227   22792 main.go:141] libmachine: (ha-617764)   <features>
	I0913 18:41:46.903237   22792 main.go:141] libmachine: (ha-617764)     <acpi/>
	I0913 18:41:46.903245   22792 main.go:141] libmachine: (ha-617764)     <apic/>
	I0913 18:41:46.903255   22792 main.go:141] libmachine: (ha-617764)     <pae/>
	I0913 18:41:46.903267   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903300   22792 main.go:141] libmachine: (ha-617764)   </features>
	I0913 18:41:46.903322   22792 main.go:141] libmachine: (ha-617764)   <cpu mode='host-passthrough'>
	I0913 18:41:46.903331   22792 main.go:141] libmachine: (ha-617764)   
	I0913 18:41:46.903341   22792 main.go:141] libmachine: (ha-617764)   </cpu>
	I0913 18:41:46.903378   22792 main.go:141] libmachine: (ha-617764)   <os>
	I0913 18:41:46.903394   22792 main.go:141] libmachine: (ha-617764)     <type>hvm</type>
	I0913 18:41:46.903401   22792 main.go:141] libmachine: (ha-617764)     <boot dev='cdrom'/>
	I0913 18:41:46.903407   22792 main.go:141] libmachine: (ha-617764)     <boot dev='hd'/>
	I0913 18:41:46.903413   22792 main.go:141] libmachine: (ha-617764)     <bootmenu enable='no'/>
	I0913 18:41:46.903419   22792 main.go:141] libmachine: (ha-617764)   </os>
	I0913 18:41:46.903426   22792 main.go:141] libmachine: (ha-617764)   <devices>
	I0913 18:41:46.903449   22792 main.go:141] libmachine: (ha-617764)     <disk type='file' device='cdrom'>
	I0913 18:41:46.903459   22792 main.go:141] libmachine: (ha-617764)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/boot2docker.iso'/>
	I0913 18:41:46.903464   22792 main.go:141] libmachine: (ha-617764)       <target dev='hdc' bus='scsi'/>
	I0913 18:41:46.903468   22792 main.go:141] libmachine: (ha-617764)       <readonly/>
	I0913 18:41:46.903472   22792 main.go:141] libmachine: (ha-617764)     </disk>
	I0913 18:41:46.903477   22792 main.go:141] libmachine: (ha-617764)     <disk type='file' device='disk'>
	I0913 18:41:46.903482   22792 main.go:141] libmachine: (ha-617764)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:41:46.903489   22792 main.go:141] libmachine: (ha-617764)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/ha-617764.rawdisk'/>
	I0913 18:41:46.903495   22792 main.go:141] libmachine: (ha-617764)       <target dev='hda' bus='virtio'/>
	I0913 18:41:46.903499   22792 main.go:141] libmachine: (ha-617764)     </disk>
	I0913 18:41:46.903503   22792 main.go:141] libmachine: (ha-617764)     <interface type='network'>
	I0913 18:41:46.903510   22792 main.go:141] libmachine: (ha-617764)       <source network='mk-ha-617764'/>
	I0913 18:41:46.903514   22792 main.go:141] libmachine: (ha-617764)       <model type='virtio'/>
	I0913 18:41:46.903529   22792 main.go:141] libmachine: (ha-617764)     </interface>
	I0913 18:41:46.903545   22792 main.go:141] libmachine: (ha-617764)     <interface type='network'>
	I0913 18:41:46.903560   22792 main.go:141] libmachine: (ha-617764)       <source network='default'/>
	I0913 18:41:46.903572   22792 main.go:141] libmachine: (ha-617764)       <model type='virtio'/>
	I0913 18:41:46.903580   22792 main.go:141] libmachine: (ha-617764)     </interface>
	I0913 18:41:46.903585   22792 main.go:141] libmachine: (ha-617764)     <serial type='pty'>
	I0913 18:41:46.903591   22792 main.go:141] libmachine: (ha-617764)       <target port='0'/>
	I0913 18:41:46.903600   22792 main.go:141] libmachine: (ha-617764)     </serial>
	I0913 18:41:46.903609   22792 main.go:141] libmachine: (ha-617764)     <console type='pty'>
	I0913 18:41:46.903619   22792 main.go:141] libmachine: (ha-617764)       <target type='serial' port='0'/>
	I0913 18:41:46.903637   22792 main.go:141] libmachine: (ha-617764)     </console>
	I0913 18:41:46.903652   22792 main.go:141] libmachine: (ha-617764)     <rng model='virtio'>
	I0913 18:41:46.903666   22792 main.go:141] libmachine: (ha-617764)       <backend model='random'>/dev/random</backend>
	I0913 18:41:46.903675   22792 main.go:141] libmachine: (ha-617764)     </rng>
	I0913 18:41:46.903682   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903691   22792 main.go:141] libmachine: (ha-617764)     
	I0913 18:41:46.903699   22792 main.go:141] libmachine: (ha-617764)   </devices>
	I0913 18:41:46.903708   22792 main.go:141] libmachine: (ha-617764) </domain>
	I0913 18:41:46.903718   22792 main.go:141] libmachine: (ha-617764) 
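The domain XML above carries the values from the cluster config: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO, the raw disk image, and the mk-ha-617764 network. A minimal sketch of rendering XML like it with text/template; the struct and its fields are illustrative, not minikube's actual configuration type:

```go
package main

import (
	"os"
	"text/template"
)

// domainConfig holds only the values visible in the logged XML.
type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:      "ha-617764",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/boot2docker.iso",
		DiskPath:  "/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/ha-617764.rawdisk",
		Network:   "mk-ha-617764",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```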
	I0913 18:41:46.908004   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:03:35:b9 in network default
	I0913 18:41:46.908582   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:46.908600   22792 main.go:141] libmachine: (ha-617764) Ensuring networks are active...
	I0913 18:41:46.909237   22792 main.go:141] libmachine: (ha-617764) Ensuring network default is active
	I0913 18:41:46.909547   22792 main.go:141] libmachine: (ha-617764) Ensuring network mk-ha-617764 is active
	I0913 18:41:46.910141   22792 main.go:141] libmachine: (ha-617764) Getting domain xml...
	I0913 18:41:46.910893   22792 main.go:141] libmachine: (ha-617764) Creating domain...
	I0913 18:41:48.077626   22792 main.go:141] libmachine: (ha-617764) Waiting to get IP...
	I0913 18:41:48.078377   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.078794   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.078836   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.078769   22815 retry.go:31] will retry after 204.25518ms: waiting for machine to come up
	I0913 18:41:48.284172   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.284644   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.284671   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.284596   22815 retry.go:31] will retry after 380.64238ms: waiting for machine to come up
	I0913 18:41:48.667071   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:48.667404   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:48.667448   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:48.667387   22815 retry.go:31] will retry after 461.878657ms: waiting for machine to come up
	I0913 18:41:49.131208   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:49.131674   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:49.131696   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:49.131636   22815 retry.go:31] will retry after 465.910019ms: waiting for machine to come up
	I0913 18:41:49.599586   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:49.600042   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:49.600071   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:49.599990   22815 retry.go:31] will retry after 520.107531ms: waiting for machine to come up
	I0913 18:41:50.121442   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:50.121811   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:50.121847   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:50.121771   22815 retry.go:31] will retry after 841.781356ms: waiting for machine to come up
	I0913 18:41:50.964741   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:50.965088   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:50.965138   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:50.965055   22815 retry.go:31] will retry after 878.516977ms: waiting for machine to come up
	I0913 18:41:51.844650   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:51.845078   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:51.845105   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:51.845024   22815 retry.go:31] will retry after 1.02797598s: waiting for machine to come up
	I0913 18:41:52.874267   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:52.874720   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:52.874771   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:52.874669   22815 retry.go:31] will retry after 1.506028162s: waiting for machine to come up
	I0913 18:41:54.382227   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:54.382632   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:54.382653   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:54.382588   22815 retry.go:31] will retry after 2.112322208s: waiting for machine to come up
	I0913 18:41:56.496683   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:56.497136   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:56.497181   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:56.497110   22815 retry.go:31] will retry after 2.314980479s: waiting for machine to come up
	I0913 18:41:58.814590   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:41:58.814997   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:41:58.815019   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:41:58.814968   22815 retry.go:31] will retry after 3.001940314s: waiting for machine to come up
	I0913 18:42:01.818637   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:01.818951   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:42:01.818972   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:42:01.818927   22815 retry.go:31] will retry after 4.031102313s: waiting for machine to come up
	I0913 18:42:05.852122   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:05.852506   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find current IP address of domain ha-617764 in network mk-ha-617764
	I0913 18:42:05.852527   22792 main.go:141] libmachine: (ha-617764) DBG | I0913 18:42:05.852470   22815 retry.go:31] will retry after 4.375378529s: waiting for machine to come up
	I0913 18:42:10.229015   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.229456   22792 main.go:141] libmachine: (ha-617764) Found IP for machine: 192.168.39.145
	I0913 18:42:10.229476   22792 main.go:141] libmachine: (ha-617764) Reserving static IP address...
	I0913 18:42:10.229488   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has current primary IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.229797   22792 main.go:141] libmachine: (ha-617764) DBG | unable to find host DHCP lease matching {name: "ha-617764", mac: "52:54:00:1a:5d:60", ip: "192.168.39.145"} in network mk-ha-617764
	I0913 18:42:10.299811   22792 main.go:141] libmachine: (ha-617764) DBG | Getting to WaitForSSH function...
	I0913 18:42:10.299835   22792 main.go:141] libmachine: (ha-617764) Reserved static IP address: 192.168.39.145
	I0913 18:42:10.299847   22792 main.go:141] libmachine: (ha-617764) Waiting for SSH to be available...
	I0913 18:42:10.302478   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.302834   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.302854   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.302969   22792 main.go:141] libmachine: (ha-617764) DBG | Using SSH client type: external
	I0913 18:42:10.302995   22792 main.go:141] libmachine: (ha-617764) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa (-rw-------)
	I0913 18:42:10.303051   22792 main.go:141] libmachine: (ha-617764) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:42:10.303075   22792 main.go:141] libmachine: (ha-617764) DBG | About to run SSH command:
	I0913 18:42:10.303090   22792 main.go:141] libmachine: (ha-617764) DBG | exit 0
	I0913 18:42:10.426273   22792 main.go:141] libmachine: (ha-617764) DBG | SSH cmd err, output: <nil>: 
	I0913 18:42:10.426570   22792 main.go:141] libmachine: (ha-617764) KVM machine creation complete!
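The repeated "will retry after …: waiting for machine to come up" lines above show the driver polling with growing, jittered delays until the domain reports an IP and SSH answers. A small stdlib sketch of that retry pattern; the helper name and delay schedule are illustrative, not minikube's retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxAttempts is reached, sleeping a
// growing, jittered delay between attempts - the pattern behind the
// "will retry after ..." log lines above.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay with each attempt and add jitter so concurrent
		// callers do not retry in lockstep.
		delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
}

func main() {
	attempts := 0
	err := retry(10, 500*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil // e.g. the domain finally reported an IP
	})
	if err != nil {
		fmt.Println("error:", err)
	}
}
```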
	I0913 18:42:10.426839   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:42:10.427462   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:10.427655   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:10.427809   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:42:10.427826   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:10.428962   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:42:10.428973   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:42:10.428985   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:42:10.428992   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.431154   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.431525   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.431551   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.431737   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.431931   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.432072   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.432202   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.432369   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.432565   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.432579   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:42:10.533614   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
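SSH readiness is confirmed above by running the trivial command `exit 0` as docker@192.168.39.145 with the machine's id_rsa key. A minimal sketch of the same probe using golang.org/x/crypto/ssh rather than the libmachine client; the user, address, and key path are taken from the log above:

```go
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa")
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
		Timeout:         10 * time.Second,
	}

	client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer session.Close()

	// The same trivial command the log shows: success means SSH is usable.
	if err := session.Run("exit 0"); err != nil {
		log.Fatalf("exit 0 failed: %v", err)
	}
	log.Println("SSH is available")
}
```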
	I0913 18:42:10.533653   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:42:10.533661   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.536476   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.536863   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.536896   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.537040   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.537233   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.537404   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.537541   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.537692   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.537958   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.537969   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:42:10.642894   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:42:10.643008   22792 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:42:10.643022   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:42:10.643031   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.643282   22792 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:42:10.643309   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.643482   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.646247   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.646623   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.646650   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.646771   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.646959   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.647132   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.647295   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.647445   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.647616   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.647626   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:42:10.763740   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:42:10.763771   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.766562   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.766902   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.766930   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.767076   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:10.767278   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.767451   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:10.767568   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:10.767702   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:10.767869   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:10.767885   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:42:10.883089   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:10.883119   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:42:10.883137   22792 buildroot.go:174] setting up certificates
	I0913 18:42:10.883191   22792 provision.go:84] configureAuth start
	I0913 18:42:10.883207   22792 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:42:10.883440   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:10.886378   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.886734   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.886754   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.886911   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:10.888976   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.889323   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:10.889339   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:10.889465   22792 provision.go:143] copyHostCerts
	I0913 18:42:10.889498   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:10.889526   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:42:10.889534   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:10.889595   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:42:10.889676   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:10.889704   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:42:10.889708   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:10.889730   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:42:10.889783   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:10.889800   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:42:10.889803   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:10.889823   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:42:10.889878   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
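The provision step above issues a server certificate, signed with the profile's ca-key.pem, whose SANs cover 127.0.0.1, 192.168.39.145, ha-617764, localhost, and minikube. A compact sketch of producing a certificate with those SANs using crypto/x509; it self-signs and uses ECDSA for brevity, whereas the certificate in the log is CA-signed and the key type may differ:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate the server key (ECDSA keeps the sketch short).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// SANs mirroring the log line: loopback, the VM IP, and the host names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-617764"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
		DNSNames:     []string{"ha-617764", "localhost", "minikube"},
	}

	// Self-signed for brevity; the real cert uses ca.pem / ca-key.pem as parent and signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```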
	I0913 18:42:11.091571   22792 provision.go:177] copyRemoteCerts
	I0913 18:42:11.091641   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:42:11.091663   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.094175   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.094504   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.094534   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.094665   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.094832   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.094937   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.095049   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.176343   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:42:11.176413   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:42:11.200756   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:42:11.200825   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 18:42:11.224844   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:42:11.224901   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:42:11.248467   22792 provision.go:87] duration metric: took 365.261129ms to configureAuth
	I0913 18:42:11.248494   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:42:11.248676   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:11.248745   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.251102   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.251430   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.251460   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.251576   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.251729   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.251860   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.251978   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.252097   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:11.252311   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:11.252326   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:42:11.474430   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:42:11.474454   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:42:11.474462   22792 main.go:141] libmachine: (ha-617764) Calling .GetURL
	I0913 18:42:11.475676   22792 main.go:141] libmachine: (ha-617764) DBG | Using libvirt version 6000000
	I0913 18:42:11.477592   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.477910   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.477933   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.478053   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:42:11.478067   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:42:11.478074   22792 client.go:171] duration metric: took 25.04799423s to LocalClient.Create
	I0913 18:42:11.478112   22792 start.go:167] duration metric: took 25.048062384s to libmachine.API.Create "ha-617764"
	I0913 18:42:11.478125   22792 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 18:42:11.478143   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:42:11.478160   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.478359   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:42:11.478384   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.480294   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.480543   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.480561   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.480705   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.480847   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.480987   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.481112   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.565059   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:42:11.569516   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:42:11.569550   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:42:11.569637   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:42:11.569734   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:42:11.569745   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:42:11.569860   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:42:11.579256   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:11.603060   22792 start.go:296] duration metric: took 124.923337ms for postStartSetup
	I0913 18:42:11.603117   22792 main.go:141] libmachine: (ha-617764) Calling .GetConfigRaw
	I0913 18:42:11.603688   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:11.606119   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.606546   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.606572   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.606803   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:11.606978   22792 start.go:128] duration metric: took 25.195049778s to createHost
	I0913 18:42:11.607011   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.609202   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.609513   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.609531   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.609667   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.609836   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.609967   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.610070   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.610208   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:11.610404   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:42:11.610417   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:42:11.714852   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726252931.693013594
	
	I0913 18:42:11.714875   22792 fix.go:216] guest clock: 1726252931.693013594
	I0913 18:42:11.714884   22792 fix.go:229] Guest: 2024-09-13 18:42:11.693013594 +0000 UTC Remote: 2024-09-13 18:42:11.606989503 +0000 UTC m=+25.297899776 (delta=86.024091ms)
	I0913 18:42:11.714951   22792 fix.go:200] guest clock delta is within tolerance: 86.024091ms
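
For context, the "guest clock" check above parses the VM's output of "date +%s.%N" and compares it with the host-side reference time (the "Remote" value), flagging the clock for adjustment only when the skew exceeds a tolerance. A minimal Go sketch of that comparison, illustrative only and using an assumed 2-second tolerance rather than minikube's actual value:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports the skew between the guest clock and the host clock
// and whether it falls within the allowed tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Unix(1726252931, 693013594)                       // parsed from "date +%s.%N" on the VM (see log above)
	host := time.Date(2024, 9, 13, 18, 42, 11, 606989503, time.UTC) // host-side reference timestamp from the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)           // 2s tolerance is an assumption
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
}
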
	I0913 18:42:11.714960   22792 start.go:83] releasing machines lock for "ha-617764", held for 25.303117412s
	I0913 18:42:11.714991   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.715245   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:11.717660   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.718028   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.718057   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.718183   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.718784   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.718983   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:11.719074   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:42:11.719163   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.719206   22792 ssh_runner.go:195] Run: cat /version.json
	I0913 18:42:11.719227   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:11.721920   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.721954   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722230   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.722254   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722282   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:11.722301   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:11.722411   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.722537   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:11.722601   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.722702   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:11.722754   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.722842   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:11.722890   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.722960   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:11.820073   22792 ssh_runner.go:195] Run: systemctl --version
	I0913 18:42:11.825894   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:42:11.982385   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:42:11.988782   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:42:11.988864   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:42:12.004565   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:42:12.004593   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:42:12.004661   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:42:12.019979   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:42:12.032588   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:42:12.032636   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:42:12.045995   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:42:12.058796   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:42:12.171682   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:42:12.328312   22792 docker.go:233] disabling docker service ...
	I0913 18:42:12.328387   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:42:12.342929   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:42:12.355609   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:42:12.461539   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:42:12.583650   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:42:12.597599   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:42:12.616301   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:42:12.616369   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.627045   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:42:12.627114   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.637884   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.648895   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.659405   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:42:12.670256   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.680556   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:12.697451   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
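
The sed invocations above are plain in-place edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cri-o to the cgroupfs cgroup manager, and run conmon in the "pod" cgroup. A rough Go equivalent of those substitutions, applied to an in-memory copy of the file (a sketch, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same kind of substitutions as the sed commands
// above to an in-memory copy of 02-crio.conf.
func rewriteCrioConf(conf string) string {
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any pre-existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// Switch to cgroupfs and put conmon into the "pod" cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}
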
	I0913 18:42:12.708124   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:42:12.717399   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:42:12.717467   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:42:12.730124   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:42:12.740070   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:12.860993   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:42:12.952434   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:42:12.952520   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:42:12.957244   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:42:12.957290   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:42:12.960871   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:42:13.003023   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:42:13.003108   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:13.030965   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:13.061413   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:42:13.062704   22792 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:42:13.065064   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:13.065406   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:13.065433   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:13.065636   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:42:13.069674   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
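
The one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal line and appending a fresh entry pointing at the gateway IP, then copying the temp file back with sudo. The same upsert expressed as a small Go helper (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for the given hostname and appends
// a fresh "ip<TAB>hostname" entry, mirroring the grep/echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry, superseded below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
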
	I0913 18:42:13.082398   22792 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:42:13.082510   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:42:13.082551   22792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:42:13.114270   22792 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 18:42:13.114344   22792 ssh_runner.go:195] Run: which lz4
	I0913 18:42:13.118116   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0913 18:42:13.118209   22792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:42:13.122135   22792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:42:13.122172   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 18:42:14.418387   22792 crio.go:462] duration metric: took 1.300206452s to copy over tarball
	I0913 18:42:14.418465   22792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:42:16.405722   22792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.987230034s)
	I0913 18:42:16.405745   22792 crio.go:469] duration metric: took 1.987328817s to extract the tarball
	I0913 18:42:16.405752   22792 ssh_runner.go:146] rm: /preloaded.tar.lz4
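
The preload step copies the ~388 MB images tarball to the guest and unpacks it into /var, streaming lz4 decompression through tar and preserving security.capability xattrs so the cached container images come up intact. A Go sketch that runs the same extraction command and reports the duration metric (assumes lz4 and GNU tar are available on the target, as they are on this minikube ISO):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// The same extraction the log performs on the guest; shown for illustration only.
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}
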
	I0913 18:42:16.443623   22792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:42:16.489290   22792 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:42:16.489312   22792 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:42:16.489319   22792 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 18:42:16.489446   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:42:16.489517   22792 ssh_runner.go:195] Run: crio config
	I0913 18:42:16.532922   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:42:16.532944   22792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 18:42:16.532955   22792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:42:16.532974   22792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:42:16.533087   22792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
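
The kubeadm YAML above is rendered from the structured options dumped at kubeadm.go:181. A heavily trimmed illustration of how such a config can be produced with text/template (not minikube's actual template; only a few of the options are wired through):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a pared-down stand-in for the options printed at
// kubeadm.go:181; only the fields this tiny template uses are included.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

// Not minikube's real template; just enough to show how the YAML above is
// produced from structured options.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.145",
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
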
	
	I0913 18:42:16.533109   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:42:16.533150   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:42:16.549716   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:42:16.549818   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0913 18:42:16.549866   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:16.559900   22792 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:42:16.559962   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 18:42:16.569382   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 18:42:16.585673   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:42:16.602255   22792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 18:42:16.618723   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0913 18:42:16.634794   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:42:16.638626   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:42:16.651362   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:16.762368   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:42:16.779430   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 18:42:16.779452   22792 certs.go:194] generating shared ca certs ...
	I0913 18:42:16.779510   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.779672   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:42:16.779714   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:42:16.779721   22792 certs.go:256] generating profile certs ...
	I0913 18:42:16.779771   22792 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:42:16.779792   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt with IP's: []
	I0913 18:42:16.941388   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt ...
	I0913 18:42:16.941415   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt: {Name:mk44eed791f2583040b622110d984321628f6223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.941581   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key ...
	I0913 18:42:16.941593   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key: {Name:mk1915c48dc6fc804dedf32c0a46e920bb821a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:16.941665   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887
	I0913 18:42:16.941679   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.254]
	I0913 18:42:17.210285   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 ...
	I0913 18:42:17.210315   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887: {Name:mk8a652a777a3d4d8cb2161b0f1935680536b79d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.210463   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887 ...
	I0913 18:42:17.210475   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887: {Name:mkdb5fbb1ec247d9ce8891014dfa79d01eef24fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.210543   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.9cc0c887 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:42:17.210633   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.9cc0c887 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 18:42:17.210686   22792 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:42:17.210700   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt with IP's: []
	I0913 18:42:17.337363   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt ...
	I0913 18:42:17.337393   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt: {Name:mkd514a028f059d8de360447f0fae602d4a32c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:17.337549   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key ...
	I0913 18:42:17.337560   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key: {Name:mk3daf966e864f78edc7ad53314f95accf71a54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
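
The certs.go/crypto.go steps above issue the profile certificates signed by the shared minikubeCA; note that the apiserver cert is generated with IP SANs for the service IP (10.96.0.1), 127.0.0.1, 10.0.0.1, the node IP (192.168.39.145) and the HA virtual IP (192.168.39.254). A self-contained crypto/x509 sketch of issuing a CA-signed serving cert with that SAN list (illustrative, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

// issueCert signs a serving certificate with the given CA, embedding the IP
// SANs, conceptually what the "generating signed profile cert" steps do.
func issueCert(ca *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed stand-in for minikubeCA so the sketch is runnable.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	ca, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// SAN list from the apiserver cert above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.145"), net.ParseIP("192.168.39.254"),
	}
	if _, err := issueCert(ca, caKey, "minikube", ips); err != nil {
		log.Fatal(err)
	}
	log.Printf("issued apiserver-style cert with %d IP SANs", len(ips))
}
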
	I0913 18:42:17.337625   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:42:17.337642   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:42:17.337652   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:42:17.337662   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:42:17.337673   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:42:17.337683   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:42:17.337695   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:42:17.337704   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:42:17.337755   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:42:17.337788   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:42:17.337796   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:42:17.337829   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:42:17.337856   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:42:17.337877   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:42:17.337916   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:17.337940   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.337959   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.337972   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.338554   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:42:17.364338   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:42:17.387197   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:42:17.410443   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:42:17.433814   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 18:42:17.456479   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:42:17.479080   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:42:17.501736   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:42:17.524376   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:42:17.549608   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:42:17.572433   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:42:17.597199   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:42:17.613267   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:42:17.619055   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:42:17.629948   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.634415   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.634473   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:17.640077   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:42:17.650772   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:42:17.661921   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.666610   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.666668   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:42:17.672350   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:42:17.683307   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:42:17.694195   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.698826   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.698883   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:42:17.704664   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
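
Each CA bundle installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem above) so that OpenSSL-based clients resolve it during verification. A small Go helper reproducing that hash-and-symlink step, shelling out to the same openssl invocation shown in the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the certificate's subject hash and links
// <certsDir>/<hash>.0 back to the installed PEM, mirroring the
// "openssl x509 -hash -noout" plus "ln -fs" steps above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked minikubeCA.pem by subject hash")
}
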
	I0913 18:42:17.715573   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:42:17.719695   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:42:17.719743   22792 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:42:17.719833   22792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:42:17.719901   22792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:42:17.756895   22792 cri.go:89] found id: ""
	I0913 18:42:17.756978   22792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:42:17.767125   22792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:42:17.776625   22792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:42:17.786162   22792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:42:17.786188   22792 kubeadm.go:157] found existing configuration files:
	
	I0913 18:42:17.786239   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:42:17.795290   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:42:17.795350   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:42:17.804626   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:42:17.813683   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:42:17.813741   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:42:17.823450   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:42:17.832901   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:42:17.832962   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:42:17.842504   22792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:42:17.851577   22792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:42:17.851639   22792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:42:17.861524   22792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:42:17.958735   22792 kubeadm.go:310] W0913 18:42:17.943166     843 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:42:17.961057   22792 kubeadm.go:310] W0913 18:42:17.945581     843 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:42:18.060353   22792 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:42:29.172501   22792 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:42:29.172573   22792 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:42:29.172684   22792 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:42:29.172832   22792 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:42:29.172965   22792 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:42:29.173065   22792 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:42:29.174820   22792 out.go:235]   - Generating certificates and keys ...
	I0913 18:42:29.174903   22792 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:42:29.174960   22792 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:42:29.175019   22792 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:42:29.175086   22792 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:42:29.175159   22792 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:42:29.175230   22792 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:42:29.175305   22792 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:42:29.175507   22792 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-617764 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0913 18:42:29.175590   22792 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:42:29.175753   22792 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-617764 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0913 18:42:29.175840   22792 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:42:29.175930   22792 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:42:29.175992   22792 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:42:29.176080   22792 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:42:29.176162   22792 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:42:29.176240   22792 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:42:29.176320   22792 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:42:29.176409   22792 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:42:29.176484   22792 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:42:29.176570   22792 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:42:29.176629   22792 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:42:29.178531   22792 out.go:235]   - Booting up control plane ...
	I0913 18:42:29.178618   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:42:29.178715   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:42:29.178797   22792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:42:29.178891   22792 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:42:29.178971   22792 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:42:29.179009   22792 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:42:29.179149   22792 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:42:29.179252   22792 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:42:29.179307   22792 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001926088s
	I0913 18:42:29.179401   22792 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:42:29.179459   22792 kubeadm.go:310] [api-check] The API server is healthy after 5.655401274s
	I0913 18:42:29.179582   22792 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:42:29.179756   22792 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:42:29.179836   22792 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:42:29.180032   22792 kubeadm.go:310] [mark-control-plane] Marking the node ha-617764 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:42:29.180085   22792 kubeadm.go:310] [bootstrap-token] Using token: wcshh7.vfnyb8uttcj6bcfg
	I0913 18:42:29.181519   22792 out.go:235]   - Configuring RBAC rules ...
	I0913 18:42:29.181620   22792 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:42:29.181691   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:42:29.181810   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:42:29.181964   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:42:29.182169   22792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:42:29.182276   22792 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:42:29.182380   22792 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:42:29.182420   22792 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:42:29.182464   22792 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:42:29.182470   22792 kubeadm.go:310] 
	I0913 18:42:29.182523   22792 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:42:29.182529   22792 kubeadm.go:310] 
	I0913 18:42:29.182597   22792 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:42:29.182603   22792 kubeadm.go:310] 
	I0913 18:42:29.182624   22792 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:42:29.182677   22792 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:42:29.182728   22792 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:42:29.182735   22792 kubeadm.go:310] 
	I0913 18:42:29.182778   22792 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:42:29.182784   22792 kubeadm.go:310] 
	I0913 18:42:29.182823   22792 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:42:29.182828   22792 kubeadm.go:310] 
	I0913 18:42:29.182875   22792 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:42:29.182938   22792 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:42:29.183002   22792 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:42:29.183009   22792 kubeadm.go:310] 
	I0913 18:42:29.183083   22792 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:42:29.183152   22792 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:42:29.183158   22792 kubeadm.go:310] 
	I0913 18:42:29.183226   22792 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wcshh7.vfnyb8uttcj6bcfg \
	I0913 18:42:29.183315   22792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 18:42:29.183349   22792 kubeadm.go:310] 	--control-plane 
	I0913 18:42:29.183372   22792 kubeadm.go:310] 
	I0913 18:42:29.183490   22792 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:42:29.183497   22792 kubeadm.go:310] 
	I0913 18:42:29.183600   22792 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wcshh7.vfnyb8uttcj6bcfg \
	I0913 18:42:29.183695   22792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
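The join commands above embed a discovery-token CA certificate hash, which kubeadm computes as the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA. As an illustration only (not minikube's own code), a small Go program along these lines can recompute that value on the control-plane node, assuming the CA certificate sits at kubeadm's default path /etc/kubernetes/pki/ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Default kubeadm CA location; adjust if the cluster uses a custom pki dir.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

Run against the same cluster CA, the printed value should match the sha256:a4240e... hash shown in the log.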
	I0913 18:42:29.183705   22792 cni.go:84] Creating CNI manager for ""
	I0913 18:42:29.183712   22792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 18:42:29.185184   22792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0913 18:42:29.186427   22792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0913 18:42:29.193124   22792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0913 18:42:29.193152   22792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0913 18:42:29.211367   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0913 18:42:29.620466   22792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:42:29.620575   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:29.620716   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764 minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=true
	I0913 18:42:29.842265   22792 ops.go:34] apiserver oom_adj: -16
	I0913 18:42:29.842480   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:30.342706   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:30.842533   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:31.343422   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:31.842644   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.342550   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.842702   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:42:32.945537   22792 kubeadm.go:1113] duration metric: took 3.325029347s to wait for elevateKubeSystemPrivileges
	I0913 18:42:32.945573   22792 kubeadm.go:394] duration metric: took 15.225833532s to StartCluster
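The burst of `kubectl get sa default` calls above is a poll for the default service account to exist before RBAC privileges are granted. A minimal sketch of that polling pattern (illustrative only; it assumes kubectl is on PATH rather than the versioned binary path used in the log):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` on a 500ms cadence until it
    // succeeds or the context expires, mirroring the retry loop in the log above.
    func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
            panic(err)
        }
        fmt.Println("default service account is ready")
    }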
	I0913 18:42:32.945595   22792 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:32.945688   22792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:42:32.946598   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:32.946842   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:42:32.946852   22792 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:42:32.946877   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:42:32.946891   22792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 18:42:32.946971   22792 addons.go:69] Setting storage-provisioner=true in profile "ha-617764"
	I0913 18:42:32.946990   22792 addons.go:234] Setting addon storage-provisioner=true in "ha-617764"
	I0913 18:42:32.947019   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:32.947062   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:32.946989   22792 addons.go:69] Setting default-storageclass=true in profile "ha-617764"
	I0913 18:42:32.947091   22792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-617764"
	I0913 18:42:32.947394   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.947404   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.947425   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.947513   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.963607   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0913 18:42:32.963910   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0913 18:42:32.964165   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.964260   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.964866   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.964886   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.964931   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.964951   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.965288   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.965289   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.965504   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:32.965895   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.965935   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.967835   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:42:32.968146   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 18:42:32.968608   22792 cert_rotation.go:140] Starting client certificate rotation controller
	I0913 18:42:32.968754   22792 addons.go:234] Setting addon default-storageclass=true in "ha-617764"
	I0913 18:42:32.968780   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:32.969002   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.969032   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.981249   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0913 18:42:32.981684   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.982160   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.982185   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.982525   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.982698   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:32.983665   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0913 18:42:32.984052   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:32.984518   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:32.984532   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:32.984586   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:32.984938   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:32.985343   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:32.985373   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:32.986418   22792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:42:32.987796   22792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:42:32.987811   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:42:32.987825   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:32.990920   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:32.991398   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:32.991430   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:32.991626   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:32.991806   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:32.991948   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:32.992069   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:33.000960   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0913 18:42:33.001377   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:33.001866   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:33.001887   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:33.002180   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:33.002376   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:33.003800   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:33.003996   22792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:42:33.004014   22792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:42:33.004029   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:33.006450   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:33.006853   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:33.006869   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:33.007034   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:33.007221   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:33.007366   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:33.007510   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
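Both addon manifests are copied over SSH clients built from the machine's IP, port 22, per-machine private key, and the docker user, as shown above. A rough equivalent using golang.org/x/crypto/ssh (a sketch under those assumptions, not minikube's sshutil implementation; the key path and address are taken from the log as examples):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Any remote command works here; listing the addons dir is just an example.
        out, err := session.CombinedOutput("ls /etc/kubernetes/addons")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }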
	I0913 18:42:33.090159   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:42:33.189726   22792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:42:33.218377   22792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:42:33.516232   22792 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
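Unescaped, the sed pipeline above splices a `log` directive ahead of the `errors` line and the following hosts stanza ahead of the forward block in the CoreDNS Corefile; that stanza is what injects the host.minikube.internal record reported here:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }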
	I0913 18:42:33.790186   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790220   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790255   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790274   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790546   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790561   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790571   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790579   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790608   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790621   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790631   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.790638   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.790810   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790813   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.790833   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790852   22792 main.go:141] libmachine: (ha-617764) DBG | Closing plugin on server side
	I0913 18:42:33.790821   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.790904   22792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 18:42:33.790927   22792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 18:42:33.791043   22792 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0913 18:42:33.791054   22792 round_trippers.go:469] Request Headers:
	I0913 18:42:33.791064   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:42:33.791076   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:42:33.808225   22792 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0913 18:42:33.808988   22792 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0913 18:42:33.809008   22792 round_trippers.go:469] Request Headers:
	I0913 18:42:33.809019   22792 round_trippers.go:473]     Content-Type: application/json
	I0913 18:42:33.809024   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:42:33.809028   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:42:33.813534   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:42:33.813685   22792 main.go:141] libmachine: Making call to close driver server
	I0913 18:42:33.813703   22792 main.go:141] libmachine: (ha-617764) Calling .Close
	I0913 18:42:33.813977   22792 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:42:33.813997   22792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:42:33.816633   22792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0913 18:42:33.817831   22792 addons.go:510] duration metric: took 870.940329ms for enable addons: enabled=[storage-provisioner default-storageclass]
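The GET/PUT round trips against /apis/storage.k8s.io/v1/storageclasses above are the default-storageclass addon reconciling the "standard" class. A hedged client-go sketch for inspecting which StorageClass carries the default-class annotation (the kubeconfig path is copied from the log and is only an example):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sc := range scs.Items {
            // The standard annotation marking a cluster's default StorageClass.
            isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
            fmt.Printf("%s\tdefault=%v\n", sc.Name, isDefault)
        }
    }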
	I0913 18:42:33.817878   22792 start.go:246] waiting for cluster config update ...
	I0913 18:42:33.817894   22792 start.go:255] writing updated cluster config ...
	I0913 18:42:33.820194   22792 out.go:201] 
	I0913 18:42:33.821789   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:33.821919   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:33.823747   22792 out.go:177] * Starting "ha-617764-m02" control-plane node in "ha-617764" cluster
	I0913 18:42:33.825412   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:42:33.825435   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:42:33.825541   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:42:33.825552   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:42:33.825621   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:33.825926   22792 start.go:360] acquireMachinesLock for ha-617764-m02: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:42:33.825968   22792 start.go:364] duration metric: took 23.623µs to acquireMachinesLock for "ha-617764-m02"
	I0913 18:42:33.825984   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:42:33.826053   22792 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0913 18:42:33.827760   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:42:33.827853   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:33.827885   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:33.842456   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I0913 18:42:33.842932   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:33.843363   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:33.843385   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:33.843677   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:33.843837   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:33.844018   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:33.844168   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:42:33.844198   22792 client.go:168] LocalClient.Create starting
	I0913 18:42:33.844239   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:42:33.844270   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:42:33.844285   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:42:33.844331   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:42:33.844352   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:42:33.844362   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:42:33.844379   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:42:33.844387   22792 main.go:141] libmachine: (ha-617764-m02) Calling .PreCreateCheck
	I0913 18:42:33.844535   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:33.844909   22792 main.go:141] libmachine: Creating machine...
	I0913 18:42:33.844921   22792 main.go:141] libmachine: (ha-617764-m02) Calling .Create
	I0913 18:42:33.845093   22792 main.go:141] libmachine: (ha-617764-m02) Creating KVM machine...
	I0913 18:42:33.846503   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found existing default KVM network
	I0913 18:42:33.846596   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found existing private KVM network mk-ha-617764
	I0913 18:42:33.846724   22792 main.go:141] libmachine: (ha-617764-m02) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 ...
	I0913 18:42:33.846769   22792 main.go:141] libmachine: (ha-617764-m02) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:42:33.846832   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:33.846727   23143 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:42:33.846916   22792 main.go:141] libmachine: (ha-617764-m02) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:42:34.098734   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.098637   23143 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa...
	I0913 18:42:34.182300   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.182200   23143 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/ha-617764-m02.rawdisk...
	I0913 18:42:34.182336   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Writing magic tar header
	I0913 18:42:34.182360   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Writing SSH key tar header
	I0913 18:42:34.182375   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:34.182308   23143 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 ...
	I0913 18:42:34.182445   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02
	I0913 18:42:34.182476   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:42:34.182497   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02 (perms=drwx------)
	I0913 18:42:34.182512   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:42:34.182525   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:42:34.182535   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:42:34.182545   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:42:34.182554   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Checking permissions on dir: /home
	I0913 18:42:34.182565   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Skipping /home - not owner
	I0913 18:42:34.182576   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:42:34.182590   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:42:34.182605   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:42:34.182625   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:42:34.182637   22792 main.go:141] libmachine: (ha-617764-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:42:34.182650   22792 main.go:141] libmachine: (ha-617764-m02) Creating domain...
	I0913 18:42:34.183657   22792 main.go:141] libmachine: (ha-617764-m02) define libvirt domain using xml: 
	I0913 18:42:34.183679   22792 main.go:141] libmachine: (ha-617764-m02) <domain type='kvm'>
	I0913 18:42:34.183690   22792 main.go:141] libmachine: (ha-617764-m02)   <name>ha-617764-m02</name>
	I0913 18:42:34.183700   22792 main.go:141] libmachine: (ha-617764-m02)   <memory unit='MiB'>2200</memory>
	I0913 18:42:34.183709   22792 main.go:141] libmachine: (ha-617764-m02)   <vcpu>2</vcpu>
	I0913 18:42:34.183718   22792 main.go:141] libmachine: (ha-617764-m02)   <features>
	I0913 18:42:34.183726   22792 main.go:141] libmachine: (ha-617764-m02)     <acpi/>
	I0913 18:42:34.183733   22792 main.go:141] libmachine: (ha-617764-m02)     <apic/>
	I0913 18:42:34.183742   22792 main.go:141] libmachine: (ha-617764-m02)     <pae/>
	I0913 18:42:34.183752   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.183760   22792 main.go:141] libmachine: (ha-617764-m02)   </features>
	I0913 18:42:34.183771   22792 main.go:141] libmachine: (ha-617764-m02)   <cpu mode='host-passthrough'>
	I0913 18:42:34.183778   22792 main.go:141] libmachine: (ha-617764-m02)   
	I0913 18:42:34.183791   22792 main.go:141] libmachine: (ha-617764-m02)   </cpu>
	I0913 18:42:34.183820   22792 main.go:141] libmachine: (ha-617764-m02)   <os>
	I0913 18:42:34.183838   22792 main.go:141] libmachine: (ha-617764-m02)     <type>hvm</type>
	I0913 18:42:34.183852   22792 main.go:141] libmachine: (ha-617764-m02)     <boot dev='cdrom'/>
	I0913 18:42:34.183862   22792 main.go:141] libmachine: (ha-617764-m02)     <boot dev='hd'/>
	I0913 18:42:34.183875   22792 main.go:141] libmachine: (ha-617764-m02)     <bootmenu enable='no'/>
	I0913 18:42:34.183885   22792 main.go:141] libmachine: (ha-617764-m02)   </os>
	I0913 18:42:34.183895   22792 main.go:141] libmachine: (ha-617764-m02)   <devices>
	I0913 18:42:34.183905   22792 main.go:141] libmachine: (ha-617764-m02)     <disk type='file' device='cdrom'>
	I0913 18:42:34.183923   22792 main.go:141] libmachine: (ha-617764-m02)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/boot2docker.iso'/>
	I0913 18:42:34.183934   22792 main.go:141] libmachine: (ha-617764-m02)       <target dev='hdc' bus='scsi'/>
	I0913 18:42:34.183946   22792 main.go:141] libmachine: (ha-617764-m02)       <readonly/>
	I0913 18:42:34.183956   22792 main.go:141] libmachine: (ha-617764-m02)     </disk>
	I0913 18:42:34.183967   22792 main.go:141] libmachine: (ha-617764-m02)     <disk type='file' device='disk'>
	I0913 18:42:34.183979   22792 main.go:141] libmachine: (ha-617764-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:42:34.183995   22792 main.go:141] libmachine: (ha-617764-m02)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/ha-617764-m02.rawdisk'/>
	I0913 18:42:34.184005   22792 main.go:141] libmachine: (ha-617764-m02)       <target dev='hda' bus='virtio'/>
	I0913 18:42:34.184014   22792 main.go:141] libmachine: (ha-617764-m02)     </disk>
	I0913 18:42:34.184024   22792 main.go:141] libmachine: (ha-617764-m02)     <interface type='network'>
	I0913 18:42:34.184047   22792 main.go:141] libmachine: (ha-617764-m02)       <source network='mk-ha-617764'/>
	I0913 18:42:34.184072   22792 main.go:141] libmachine: (ha-617764-m02)       <model type='virtio'/>
	I0913 18:42:34.184082   22792 main.go:141] libmachine: (ha-617764-m02)     </interface>
	I0913 18:42:34.184089   22792 main.go:141] libmachine: (ha-617764-m02)     <interface type='network'>
	I0913 18:42:34.184098   22792 main.go:141] libmachine: (ha-617764-m02)       <source network='default'/>
	I0913 18:42:34.184105   22792 main.go:141] libmachine: (ha-617764-m02)       <model type='virtio'/>
	I0913 18:42:34.184112   22792 main.go:141] libmachine: (ha-617764-m02)     </interface>
	I0913 18:42:34.184121   22792 main.go:141] libmachine: (ha-617764-m02)     <serial type='pty'>
	I0913 18:42:34.184133   22792 main.go:141] libmachine: (ha-617764-m02)       <target port='0'/>
	I0913 18:42:34.184139   22792 main.go:141] libmachine: (ha-617764-m02)     </serial>
	I0913 18:42:34.184147   22792 main.go:141] libmachine: (ha-617764-m02)     <console type='pty'>
	I0913 18:42:34.184155   22792 main.go:141] libmachine: (ha-617764-m02)       <target type='serial' port='0'/>
	I0913 18:42:34.184162   22792 main.go:141] libmachine: (ha-617764-m02)     </console>
	I0913 18:42:34.184172   22792 main.go:141] libmachine: (ha-617764-m02)     <rng model='virtio'>
	I0913 18:42:34.184181   22792 main.go:141] libmachine: (ha-617764-m02)       <backend model='random'>/dev/random</backend>
	I0913 18:42:34.184190   22792 main.go:141] libmachine: (ha-617764-m02)     </rng>
	I0913 18:42:34.184196   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.184205   22792 main.go:141] libmachine: (ha-617764-m02)     
	I0913 18:42:34.184213   22792 main.go:141] libmachine: (ha-617764-m02)   </devices>
	I0913 18:42:34.184224   22792 main.go:141] libmachine: (ha-617764-m02) </domain>
	I0913 18:42:34.184234   22792 main.go:141] libmachine: (ha-617764-m02) 
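The XML printed above is handed to libvirt to define the ha-617764-m02 domain. A minimal sketch of that step using the libvirt.org/go/libvirt bindings (illustrative rather than the driver's actual code; the file name is hypothetical, and cgo plus a local libvirt daemon on qemu:///system are assumed):

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical file holding domain XML like the block logged above.
        xml, err := os.ReadFile("ha-617764-m02.xml")
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Persist the domain definition, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }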
	I0913 18:42:34.191005   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:bc:5e:d5 in network default
	I0913 18:42:34.191737   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:34.191757   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring networks are active...
	I0913 18:42:34.192718   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring network default is active
	I0913 18:42:34.193103   22792 main.go:141] libmachine: (ha-617764-m02) Ensuring network mk-ha-617764 is active
	I0913 18:42:34.193588   22792 main.go:141] libmachine: (ha-617764-m02) Getting domain xml...
	I0913 18:42:34.194419   22792 main.go:141] libmachine: (ha-617764-m02) Creating domain...
	I0913 18:42:35.408107   22792 main.go:141] libmachine: (ha-617764-m02) Waiting to get IP...
	I0913 18:42:35.408973   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.409470   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.409493   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.409451   23143 retry.go:31] will retry after 264.373822ms: waiting for machine to come up
	I0913 18:42:35.676087   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.676476   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.676503   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.676421   23143 retry.go:31] will retry after 263.878522ms: waiting for machine to come up
	I0913 18:42:35.942022   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:35.942487   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:35.942515   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:35.942441   23143 retry.go:31] will retry after 338.022522ms: waiting for machine to come up
	I0913 18:42:36.282060   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:36.282605   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:36.282631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:36.282553   23143 retry.go:31] will retry after 536.406863ms: waiting for machine to come up
	I0913 18:42:36.820192   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:36.820631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:36.820655   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:36.820599   23143 retry.go:31] will retry after 505.176991ms: waiting for machine to come up
	I0913 18:42:37.327316   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:37.327776   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:37.327808   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:37.327731   23143 retry.go:31] will retry after 710.248346ms: waiting for machine to come up
	I0913 18:42:38.039518   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:38.039974   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:38.039999   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:38.039914   23143 retry.go:31] will retry after 1.093957656s: waiting for machine to come up
	I0913 18:42:39.135450   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:39.135831   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:39.135859   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:39.135778   23143 retry.go:31] will retry after 1.203417577s: waiting for machine to come up
	I0913 18:42:40.340982   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:40.341334   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:40.341362   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:40.341294   23143 retry.go:31] will retry after 1.236225531s: waiting for machine to come up
	I0913 18:42:41.579551   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:41.580029   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:41.580051   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:41.579969   23143 retry.go:31] will retry after 2.326969723s: waiting for machine to come up
	I0913 18:42:43.908257   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:43.908629   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:43.908654   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:43.908589   23143 retry.go:31] will retry after 2.078305319s: waiting for machine to come up
	I0913 18:42:45.988301   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:45.988776   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:45.988805   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:45.988726   23143 retry.go:31] will retry after 2.330094079s: waiting for machine to come up
	I0913 18:42:48.322144   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:48.322497   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:48.322511   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:48.322461   23143 retry.go:31] will retry after 3.235874809s: waiting for machine to come up
	I0913 18:42:51.562199   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:51.562650   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find current IP address of domain ha-617764-m02 in network mk-ha-617764
	I0913 18:42:51.562678   22792 main.go:141] libmachine: (ha-617764-m02) DBG | I0913 18:42:51.562590   23143 retry.go:31] will retry after 3.996843955s: waiting for machine to come up
	I0913 18:42:55.562043   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.562475   22792 main.go:141] libmachine: (ha-617764-m02) Found IP for machine: 192.168.39.203
	I0913 18:42:55.562497   22792 main.go:141] libmachine: (ha-617764-m02) Reserving static IP address...
	I0913 18:42:55.562514   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has current primary IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.562848   22792 main.go:141] libmachine: (ha-617764-m02) DBG | unable to find host DHCP lease matching {name: "ha-617764-m02", mac: "52:54:00:ab:42:52", ip: "192.168.39.203"} in network mk-ha-617764
	I0913 18:42:55.635170   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Getting to WaitForSSH function...
	I0913 18:42:55.635207   22792 main.go:141] libmachine: (ha-617764-m02) Reserved static IP address: 192.168.39.203
	I0913 18:42:55.635256   22792 main.go:141] libmachine: (ha-617764-m02) Waiting for SSH to be available...
	I0913 18:42:55.638187   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.638602   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.638630   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.638793   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using SSH client type: external
	I0913 18:42:55.638873   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa (-rw-------)
	I0913 18:42:55.639483   22792 main.go:141] libmachine: (ha-617764-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:42:55.640013   22792 main.go:141] libmachine: (ha-617764-m02) DBG | About to run SSH command:
	I0913 18:42:55.640037   22792 main.go:141] libmachine: (ha-617764-m02) DBG | exit 0
	I0913 18:42:55.762288   22792 main.go:141] libmachine: (ha-617764-m02) DBG | SSH cmd err, output: <nil>: 
	I0913 18:42:55.762565   22792 main.go:141] libmachine: (ha-617764-m02) KVM machine creation complete!
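Machine creation above is largely a waiting game: the driver retries with a growing delay until the domain obtains a DHCP lease and SSH answers. A simplified version of that retry-with-backoff pattern (the address, delays, and cap are illustrative, not the driver's actual values):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the node's SSH port with a growing delay between attempts,
    // roughly mirroring the "will retry after ..." loop in the log.
    func waitForSSH(addr string, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay *= 2 // back off, capped at a few seconds
            }
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, maxWait)
    }

    func main() {
        if err := waitForSSH("192.168.39.203:22", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ssh is reachable")
    }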
	I0913 18:42:55.762890   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:55.763481   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:55.763669   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:55.763800   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:42:55.763813   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:42:55.765272   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:42:55.765287   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:42:55.765293   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:42:55.765298   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.767597   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.767917   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.767935   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.768060   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.768273   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.768403   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.768509   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.768631   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.768890   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.768908   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:42:55.865390   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:55.865413   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:42:55.865424   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.868116   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.868486   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.868512   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.868653   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.868837   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.868991   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.869119   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.869326   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.869599   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.869613   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:42:55.966894   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:42:55.966998   22792 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:42:55.967011   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:42:55.967022   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:55.967311   22792 buildroot.go:166] provisioning hostname "ha-617764-m02"
	I0913 18:42:55.967338   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:55.967522   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:55.970301   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.970631   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:55.970660   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:55.970825   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:55.971018   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.971163   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:55.971301   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:55.971496   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:55.971707   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:55.971725   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764-m02 && echo "ha-617764-m02" | sudo tee /etc/hostname
	I0913 18:42:56.086576   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764-m02
	
	I0913 18:42:56.086607   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.089443   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.089742   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.089766   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.089955   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.090166   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.090435   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.090571   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.090760   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.090911   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.090926   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:42:56.195182   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:42:56.195220   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:42:56.195241   22792 buildroot.go:174] setting up certificates
	I0913 18:42:56.195252   22792 provision.go:84] configureAuth start
	I0913 18:42:56.195262   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetMachineName
	I0913 18:42:56.195523   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.197899   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.198225   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.198248   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.198365   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.200705   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.201030   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.201057   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.201201   22792 provision.go:143] copyHostCerts
	I0913 18:42:56.201233   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:56.201274   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:42:56.201286   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:42:56.201366   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:42:56.201456   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:56.201478   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:42:56.201486   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:42:56.201516   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:42:56.201567   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:56.201589   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:42:56.201597   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:42:56.201623   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:42:56.201680   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764-m02 san=[127.0.0.1 192.168.39.203 ha-617764-m02 localhost minikube]
	I0913 18:42:56.304838   22792 provision.go:177] copyRemoteCerts
	I0913 18:42:56.304894   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:42:56.304915   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.307334   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.307653   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.307685   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.307806   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.307976   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.308108   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.308232   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.388206   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:42:56.388295   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0913 18:42:56.412902   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:42:56.412975   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:42:56.437081   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:42:56.437162   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:42:56.461095   22792 provision.go:87] duration metric: took 265.820588ms to configureAuth
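The scp calls above push the machine CA and the freshly minted server cert/key to /etc/docker on the new node. A quick by-hand sanity check that the pushed server cert actually chains to that CA could look like the sketch below (illustrative only, not something the test runs; host, user, and key path are taken from the log above):

  # verify the pushed server cert against the pushed CA on the guest
  KEY=/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa
  ssh -i "$KEY" docker@192.168.39.203 \
    'sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem'
  # expected output: /etc/docker/server.pem: OK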
	I0913 18:42:56.461120   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:42:56.461323   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:56.461405   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.464186   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.464537   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.464571   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.464774   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.464944   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.465101   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.465223   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.465371   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.465559   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.465575   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:42:56.681537   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:42:56.681567   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:42:56.681579   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetURL
	I0913 18:42:56.682877   22792 main.go:141] libmachine: (ha-617764-m02) DBG | Using libvirt version 6000000
	I0913 18:42:56.684960   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.685263   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.685292   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.685455   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:42:56.685473   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:42:56.685479   22792 client.go:171] duration metric: took 22.841271502s to LocalClient.Create
	I0913 18:42:56.685504   22792 start.go:167] duration metric: took 22.841337164s to libmachine.API.Create "ha-617764"
	I0913 18:42:56.685514   22792 start.go:293] postStartSetup for "ha-617764-m02" (driver="kvm2")
	I0913 18:42:56.685530   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:42:56.685549   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.685743   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:42:56.685764   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.687558   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.687865   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.687891   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.688053   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.688205   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.688342   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.688451   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.767885   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:42:56.772109   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:42:56.772127   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:42:56.772191   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:42:56.772259   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:42:56.772268   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:42:56.772342   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:42:56.781943   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:56.808581   22792 start.go:296] duration metric: took 123.052756ms for postStartSetup
	I0913 18:42:56.808619   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetConfigRaw
	I0913 18:42:56.809145   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.811531   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.811840   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.811859   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.812097   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:42:56.812259   22792 start.go:128] duration metric: took 22.986195771s to createHost
	I0913 18:42:56.812278   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.814271   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.814590   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.814616   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.814735   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.814900   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.815055   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.815181   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.815329   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:42:56.815477   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0913 18:42:56.815485   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:42:56.910973   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726252976.878556016
	
	I0913 18:42:56.910995   22792 fix.go:216] guest clock: 1726252976.878556016
	I0913 18:42:56.911001   22792 fix.go:229] Guest: 2024-09-13 18:42:56.878556016 +0000 UTC Remote: 2024-09-13 18:42:56.812269104 +0000 UTC m=+70.503179379 (delta=66.286912ms)
	I0913 18:42:56.911016   22792 fix.go:200] guest clock delta is within tolerance: 66.286912ms
	I0913 18:42:56.911021   22792 start.go:83] releasing machines lock for "ha-617764-m02", held for 23.085044062s
	I0913 18:42:56.911037   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.911342   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:56.913641   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.914008   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.914034   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.916176   22792 out.go:177] * Found network options:
	I0913 18:42:56.917389   22792 out.go:177]   - NO_PROXY=192.168.39.145
	W0913 18:42:56.918480   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:42:56.918510   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.918961   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.919119   22792 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:42:56.919195   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:42:56.919235   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	W0913 18:42:56.919318   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:42:56.919377   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:42:56.919395   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:42:56.922064   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922354   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922410   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.922440   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922589   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.922762   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.922781   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:56.922796   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:56.922906   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.922935   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:42:56.923116   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:56.923130   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:42:56.923273   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:42:56.923389   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:42:57.147627   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:42:57.154515   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:42:57.154583   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:42:57.171030   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:42:57.171050   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:42:57.171111   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:42:57.187446   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:42:57.200316   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:42:57.200359   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:42:57.212970   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:42:57.225988   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:42:57.344734   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:42:57.484508   22792 docker.go:233] disabling docker service ...
	I0913 18:42:57.484569   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:42:57.499332   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:42:57.512148   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:42:57.656863   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:42:57.779451   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:42:57.793246   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:42:57.811312   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:42:57.811380   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.822030   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:42:57.822082   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.832599   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.843228   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.854115   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:42:57.864918   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.876273   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:42:57.893313   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
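The sed commands above rewrite the cri-o drop-in in place: pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A grep like the following would confirm the net effect (illustrative; the expected lines are inferred from the sed expressions above, assuming the keys already exist in the drop-in):

  # (run on the guest) inspect the rewritten cri-o drop-in
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected (inferred from the seds above):
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",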
	I0913 18:42:57.904216   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:42:57.914207   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:42:57.914268   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:42:57.928195   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:42:57.938419   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:42:58.064351   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:42:58.165182   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:42:58.165248   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:42:58.170298   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:42:58.170339   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:42:58.174086   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:42:58.211997   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:42:58.212072   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:58.239488   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:42:58.283822   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:42:58.285548   22792 out.go:177]   - env NO_PROXY=192.168.39.145
	I0913 18:42:58.286654   22792 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:42:58.289221   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:58.289622   22792 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:48 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:42:58.289650   22792 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:42:58.289857   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:42:58.294318   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:42:58.306758   22792 mustload.go:65] Loading cluster: ha-617764
	I0913 18:42:58.306968   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:42:58.307259   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:58.307299   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:58.322070   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0913 18:42:58.322504   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:58.323022   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:58.323053   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:58.323361   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:58.323580   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:42:58.325023   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:42:58.325289   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:42:58.325319   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:42:58.339300   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0913 18:42:58.339772   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:42:58.340260   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:42:58.340278   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:42:58.340575   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:42:58.340724   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:42:58.340859   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.203
	I0913 18:42:58.340870   22792 certs.go:194] generating shared ca certs ...
	I0913 18:42:58.340882   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.340990   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:42:58.341027   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:42:58.341036   22792 certs.go:256] generating profile certs ...
	I0913 18:42:58.341109   22792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:42:58.341133   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2
	I0913 18:42:58.341148   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 18:42:58.505948   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 ...
	I0913 18:42:58.505974   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2: {Name:mk1f0f163f6880fd564fdf3cf71c4cf20e0ab1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.506144   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2 ...
	I0913 18:42:58.506157   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2: {Name:mkb45e7c95cfc51b46c801a3c439fa0dbd0be17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:42:58.506229   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.609bdfe2 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:42:58.506354   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.609bdfe2 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
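minikube mints the apiserver cert in-process (crypto.go above) and then swaps it into place under the profile directory. For reference, an equivalent OpenSSL invocation producing a CA-signed cert with the same IP SANs logged above would look roughly like this (illustrative sketch; file names are placeholders, and the real cert may carry additional DNS SANs not shown in the log):

  # rough OpenSSL equivalent of the apiserver cert generation above
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.145,IP:192.168.39.203,IP:192.168.39.254')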
	I0913 18:42:58.506480   22792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:42:58.506494   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:42:58.506507   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:42:58.506521   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:42:58.506533   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:42:58.506544   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:42:58.506557   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:42:58.506568   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:42:58.506580   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:42:58.506623   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:42:58.506650   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:42:58.506659   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:42:58.506682   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:42:58.506702   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:42:58.506722   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:42:58.506756   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:42:58.506782   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:58.506795   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:42:58.506807   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:42:58.506835   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:42:58.509789   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:58.510175   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:42:58.510204   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:42:58.510371   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:42:58.510571   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:42:58.510733   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:42:58.510861   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:42:58.586366   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 18:42:58.591311   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 18:42:58.602028   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 18:42:58.606047   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 18:42:58.615371   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 18:42:58.619130   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 18:42:58.628861   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 18:42:58.633263   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 18:42:58.643569   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 18:42:58.647816   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 18:42:58.658335   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 18:42:58.662734   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 18:42:58.672848   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:42:58.699188   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:42:58.724599   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:42:58.749275   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:42:58.773279   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 18:42:58.796703   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:42:58.820178   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:42:58.844900   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:42:58.868932   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:42:58.893473   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:42:58.917540   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:42:58.940368   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 18:42:58.956895   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 18:42:58.972923   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 18:42:58.989583   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 18:42:59.008802   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 18:42:59.026107   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 18:42:59.042329   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 18:42:59.058225   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:42:59.063866   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:42:59.074196   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.078791   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.078835   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:42:59.084460   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:42:59.094525   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:42:59.104776   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.109074   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.109126   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:42:59.114673   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:42:59.125259   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:42:59.135745   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.140613   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.140695   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:42:59.146658   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
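The symlink targets used above (b5213941.0, 3ec20f2e.0, 51391683.0) are simply the OpenSSL subject-name hash of each PEM with a ".0" suffix, which is how the /etc/ssl/certs lookup scheme works. Reproduced by hand (illustrative):

  # derive the /etc/ssl/certs symlink names the log creates
  for pem in minikubeCA.pem 110792.pem 11079.pem; do
    h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
    echo "$pem -> /etc/ssl/certs/$h.0"
  done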
	I0913 18:42:59.157420   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:42:59.161755   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:42:59.161816   22792 kubeadm.go:934] updating node {m02 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0913 18:42:59.161900   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
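The drop-in above overrides kubelet's ExecStart with --hostname-override and --node-ip for the new node; kubelet itself is only (re)started further down. Once it is running, a check like this on the guest would confirm the flags took effect (illustrative only; key path and IP from the log above):

  # confirm the kubelet drop-in and runtime flags on the new node
  KEY=/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa
  ssh -i "$KEY" docker@192.168.39.203 'systemctl cat kubelet | grep -A3 "^ExecStart="'
  ssh -i "$KEY" docker@192.168.39.203 \
    'ps -o args= -C kubelet | tr " " "\n" | grep -E "hostname-override|node-ip"'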
	I0913 18:42:59.161922   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:42:59.161952   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:42:59.176862   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:42:59.176957   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
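Once the manifest above lands in /etc/kubernetes/manifests and kubelet picks it up, the kube-vip instance that wins the plndr-cp-lock lease binds 192.168.39.254 on eth0 of the leader. An illustrative check (not part of the test; the mirror-pod name follows the usual <name>-<node> convention):

  # on the current leader, the VIP from the manifest should be bound on eth0
  ip addr show eth0 | grep -w 192.168.39.254
  kubectl -n kube-system get pod kube-vip-ha-617764-m02 -o wide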
	I0913 18:42:59.177009   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:59.187364   22792 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 18:42:59.187422   22792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 18:42:59.197410   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 18:42:59.197436   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:42:59.197495   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:42:59.197522   22792 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0913 18:42:59.197496   22792 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0913 18:42:59.202183   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 18:42:59.202209   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 18:43:02.732054   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:43:02.732128   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:43:02.737270   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 18:43:02.737313   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 18:43:02.958947   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:43:02.994648   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:43:02.994758   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:43:03.007070   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 18:43:03.007115   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
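The download.go/scp lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io, verifying each against its published .sha256 (that is what the checksum=file: part of the URL means) before copying it to /var/lib/minikube/binaries. The same fetch-and-verify by hand (illustrative):

  # mirror the checksum-verified download the log describes
  VER=v1.31.1
  for bin in kubectl kubeadm kubelet; do
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${bin}"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${bin}.sha256"
    echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check
  done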
	I0913 18:43:03.373882   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 18:43:03.384047   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 18:43:03.402339   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:43:03.421245   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:43:03.439591   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:43:03.443820   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:43:03.456121   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:43:03.581257   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:43:03.600338   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:43:03.600751   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:43:03.600803   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:43:03.615242   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0913 18:43:03.615707   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:43:03.616197   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:43:03.616216   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:43:03.616509   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:43:03.616709   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:43:03.616831   22792 start.go:317] joinCluster: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:43:03.616931   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 18:43:03.616951   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:43:03.619819   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:43:03.620222   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:43:03.620246   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:43:03.620371   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:43:03.620523   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:43:03.620676   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:43:03.620807   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:43:03.766712   22792 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:03.766767   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7j7evy.9yflqt75sqaf2ecw --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443"
	I0913 18:43:26.782487   22792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7j7evy.9yflqt75sqaf2ecw --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443": (23.015680117s)
	I0913 18:43:26.782526   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 18:43:27.216131   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764-m02 minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=false
	I0913 18:43:27.365316   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-617764-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 18:43:27.479009   22792 start.go:319] duration metric: took 23.862174011s to joinCluster
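The log lines above show how the second control-plane member is added: the tool asks the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), runs kubeadm join on the new machine with --control-plane and an explicit advertise address, re-enables kubelet, then labels the node and removes the control-plane NoSchedule taint. A minimal local sketch of the two-step join, assuming kubeadm is on PATH and run with sufficient privileges (the real code executes these commands over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: ask the existing control plane for a fresh join command (token + CA cert hash).
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.Fields(strings.TrimSpace(string(out)))

	// Step 2: run that command on the joining machine, promoting it to a control-plane member.
	// The advertise address below is the joining node's IP taken from the log; adjust for your host.
	args := append(joinCmd[1:], "--control-plane", "--apiserver-advertise-address=192.168.39.203", "--ignore-preflight-errors=all")
	if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("kubeadm join failed: %v\n%s", err, out))
	}
	fmt.Println("control-plane node joined")
}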
	I0913 18:43:27.479149   22792 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:27.479426   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:43:27.480584   22792 out.go:177] * Verifying Kubernetes components...
	I0913 18:43:27.481817   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:43:27.724286   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:43:27.745509   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:43:27.745863   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 18:43:27.745948   22792 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.145:8443
	I0913 18:43:27.746279   22792 node_ready.go:35] waiting up to 6m0s for node "ha-617764-m02" to be "Ready" ...
	I0913 18:43:27.746428   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:27.746442   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:27.746456   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:27.746462   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:27.755757   22792 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0913 18:43:28.247360   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:28.247380   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:28.247388   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:28.247392   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:28.251395   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:28.746797   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:28.746817   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:28.746824   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:28.746827   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:28.755187   22792 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0913 18:43:29.247368   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:29.247393   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:29.247402   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:29.247410   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:29.250841   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:29.747281   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:29.747304   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:29.747312   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:29.747315   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:29.750870   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:29.751575   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:30.246565   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:30.246586   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:30.246594   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:30.246597   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:30.250022   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:30.746560   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:30.746587   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:30.746597   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:30.746602   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:30.750616   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:31.246768   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:31.246788   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:31.246795   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:31.246800   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:31.250304   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:31.746805   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:31.746828   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:31.746838   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:31.746844   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:31.751727   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:31.752531   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:32.246890   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:32.246911   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:32.246924   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:32.246928   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:32.250249   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:32.747092   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:32.747114   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:32.747122   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:32.747127   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:32.750815   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:33.247103   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:33.247125   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:33.247133   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:33.247138   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:33.250742   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:33.747045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:33.747070   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:33.747083   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:33.747087   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:33.751216   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:34.247426   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:34.247454   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:34.247465   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:34.247472   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:34.251446   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:34.252350   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:34.746671   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:34.746699   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:34.746708   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:34.746713   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:34.750454   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:35.246648   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:35.246666   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:35.246675   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:35.246682   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:35.249677   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:35.746686   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:35.746707   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:35.746714   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:35.746718   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:35.750343   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:36.247410   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:36.247438   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:36.247450   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:36.247456   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:36.251732   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:36.252557   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:36.746913   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:36.746933   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:36.746944   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:36.746949   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:36.750250   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:37.247384   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:37.247405   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:37.247414   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:37.247418   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:37.251417   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:37.747331   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:37.747351   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:37.747358   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:37.747362   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:37.751415   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:38.247314   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:38.247336   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:38.247344   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:38.247348   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:38.251107   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:38.746717   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:38.746739   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:38.746752   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:38.746758   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:38.750605   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:38.751267   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:39.247047   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:39.247069   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:39.247079   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:39.247084   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:39.250631   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:39.746863   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:39.746893   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:39.746904   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:39.746911   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:39.750055   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:40.247216   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:40.247240   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:40.247247   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:40.247250   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:40.250686   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:40.746930   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:40.746950   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:40.746958   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:40.746961   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:40.750049   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:41.247174   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:41.247200   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:41.247212   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:41.247217   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:41.250485   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:41.251328   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:41.747306   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:41.747330   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:41.747337   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:41.747340   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:41.750596   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:42.246615   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:42.246642   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:42.246654   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:42.246662   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:42.250518   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:42.746549   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:42.746572   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:42.746580   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:42.746583   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:42.749508   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:43.246689   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:43.246711   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:43.246719   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:43.246724   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:43.250023   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:43.747148   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:43.747170   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:43.747181   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:43.747187   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:43.749897   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:43.750484   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:44.246957   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:44.246981   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:44.246989   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:44.246995   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:44.250339   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:44.747562   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:44.747589   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:44.747601   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:44.747606   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:44.791116   22792 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0913 18:43:45.247268   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:45.247294   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:45.247304   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:45.247310   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:45.251318   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:45.747422   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:45.747445   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:45.747453   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:45.747456   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:45.750923   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:45.751400   22792 node_ready.go:53] node "ha-617764-m02" has status "Ready":"False"
	I0913 18:43:46.246779   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:46.246806   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.246817   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.246822   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.250788   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.251283   22792 node_ready.go:49] node "ha-617764-m02" has status "Ready":"True"
	I0913 18:43:46.251316   22792 node_ready.go:38] duration metric: took 18.504986298s for node "ha-617764-m02" to be "Ready" ...
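The long run of GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02 requests above is a plain poll loop: fetch the Node object roughly every 500ms and stop once its Ready condition reports True, which here took about 18.5s. A stripped-down sketch of the same loop, assuming a bearer token and skipping TLS verification for brevity (the real client authenticates with the profile's client certificates instead):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(client *http.Client, url, token string) (bool, error) {
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Endpoint from the log; the token is a placeholder for whatever credentials your cluster accepts.
	url := "https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02"
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, url, "TOKEN"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}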
	I0913 18:43:46.251336   22792 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:43:46.251458   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:46.251470   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.251480   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.251488   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.255607   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:46.261970   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.262045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fdhnm
	I0913 18:43:46.262054   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.262061   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.262068   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.264813   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.265441   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.265458   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.265464   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.265468   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.268153   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.268738   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.268758   22792 pod_ready.go:82] duration metric: took 6.7655ms for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.268767   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.268814   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-htrbt
	I0913 18:43:46.268826   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.268836   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.268842   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.271260   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.271819   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.271833   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.271843   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.271847   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.274282   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.274979   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.274998   22792 pod_ready.go:82] duration metric: took 6.225608ms for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.275010   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.275081   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764
	I0913 18:43:46.275092   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.275128   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.275136   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.278197   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.278964   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.278980   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.278992   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.278997   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.281160   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.281715   22792 pod_ready.go:93] pod "etcd-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.281729   22792 pod_ready.go:82] duration metric: took 6.70395ms for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.281739   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.281792   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m02
	I0913 18:43:46.281799   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.281806   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.281812   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.283916   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:46.284433   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:46.284444   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.284453   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.284464   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.288133   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.288518   22792 pod_ready.go:93] pod "etcd-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.288533   22792 pod_ready.go:82] duration metric: took 6.783837ms for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.288554   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.447062   22792 request.go:632] Waited for 158.444752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:43:46.447156   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:43:46.447167   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.447178   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.447186   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.450727   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.647832   22792 request.go:632] Waited for 196.372609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.647919   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:46.647927   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.647999   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.648027   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.651891   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:46.652784   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:46.652798   22792 pod_ready.go:82] duration metric: took 364.234884ms for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
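The request.go:632 messages ("Waited for ... due to client-side throttling, not priority and fairness") are produced by the client's own rate limiter, not by the API server: with many node and pod lookups issued back to back, requests are delayed to stay within the client's configured QPS and burst. A generic token-bucket sketch of that behaviour using golang.org/x/time/rate (not client-go's actual limiter), with a deliberately low rate so the waits are visible:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests per second with a burst of 10 is assumed here for illustration.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 20; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
		// issueRequest(i) would perform the actual API call in a real client.
	}
}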
	I0913 18:43:46.652808   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:46.846869   22792 request.go:632] Waited for 194.006603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:43:46.846945   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:43:46.846952   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:46.846961   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:46.846972   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:46.849816   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:47.046830   22792 request.go:632] Waited for 196.296816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.046892   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.046896   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.046903   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.046908   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.049999   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.050465   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.050482   22792 pod_ready.go:82] duration metric: took 397.667915ms for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.050492   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.247613   22792 request.go:632] Waited for 197.055207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:43:47.247708   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:43:47.247714   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.247722   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.247726   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.251150   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.447025   22792 request.go:632] Waited for 195.29363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:47.447096   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:47.447101   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.447110   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.447115   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.450667   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.451356   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.451383   22792 pod_ready.go:82] duration metric: took 400.884125ms for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.451397   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.647436   22792 request.go:632] Waited for 195.961235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:43:47.647509   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:43:47.647514   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.647521   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.647526   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.651652   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:47.847602   22792 request.go:632] Waited for 195.36147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.847668   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:47.847674   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:47.847682   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:47.847691   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:47.851451   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:47.852078   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:47.852098   22792 pod_ready.go:82] duration metric: took 400.693621ms for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:47.852111   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.047116   22792 request.go:632] Waited for 194.935132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:43:48.047239   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:43:48.047266   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.047273   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.047277   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.050797   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.246855   22792 request.go:632] Waited for 195.227248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:48.246929   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:48.246936   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.246946   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.246955   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.250290   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.250828   22792 pod_ready.go:93] pod "kube-proxy-92mml" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:48.250845   22792 pod_ready.go:82] duration metric: took 398.720708ms for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.250855   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.447815   22792 request.go:632] Waited for 196.902431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:43:48.447893   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:43:48.447901   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.447912   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.447922   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.450968   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.647003   22792 request.go:632] Waited for 195.22434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:48.647081   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:48.647089   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.647100   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.647108   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.650460   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:48.651040   22792 pod_ready.go:93] pod "kube-proxy-hqm8n" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:48.651060   22792 pod_ready.go:82] duration metric: took 400.198016ms for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.651072   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:48.847203   22792 request.go:632] Waited for 196.062994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:43:48.847260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:43:48.847275   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:48.847283   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:48.847291   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:48.850230   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:43:49.047242   22792 request.go:632] Waited for 196.44001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:49.047295   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:43:49.047300   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.047307   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.047311   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.051206   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.051718   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:49.051736   22792 pod_ready.go:82] duration metric: took 400.657373ms for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.051746   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.246865   22792 request.go:632] Waited for 195.040081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:43:49.246928   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:43:49.246933   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.246940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.246945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.250686   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.447653   22792 request.go:632] Waited for 196.379077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:49.447718   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:43:49.447725   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.447736   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.447741   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.451346   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:49.451937   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:43:49.451961   22792 pod_ready.go:82] duration metric: took 400.208032ms for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:43:49.451976   22792 pod_ready.go:39] duration metric: took 3.200594709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:43:49.452001   22792 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:43:49.452067   22792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:43:49.469404   22792 api_server.go:72] duration metric: took 21.990223278s to wait for apiserver process to appear ...
	I0913 18:43:49.469427   22792 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:43:49.469457   22792 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0913 18:43:49.474387   22792 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0913 18:43:49.474465   22792 round_trippers.go:463] GET https://192.168.39.145:8443/version
	I0913 18:43:49.474474   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.474483   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.474494   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.475410   22792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 18:43:49.475511   22792 api_server.go:141] control plane version: v1.31.1
	I0913 18:43:49.475529   22792 api_server.go:131] duration metric: took 6.095026ms to wait for apiserver health ...
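Before declaring the control plane healthy, two cheap checks are made: pgrep for a running kube-apiserver process, then an HTTPS GET against /healthz expecting a 200 response with body "ok". A condensed sketch of that pair of checks, assuming local access, that the endpoint accepts the unauthenticated request, and skipping certificate verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
)

func main() {
	// Process check: pgrep exits non-zero when nothing matches the pattern (may need elevated privileges).
	if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}

	// Health check: /healthz should answer 200 with the body "ok", as seen in the log.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.39.145:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}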
	I0913 18:43:49.475545   22792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:43:49.646847   22792 request.go:632] Waited for 171.210404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:49.646915   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:49.646922   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.646931   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.646938   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.651797   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:43:49.656746   22792 system_pods.go:59] 17 kube-system pods found
	I0913 18:43:49.656779   22792 system_pods.go:61] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:43:49.656785   22792 system_pods.go:61] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:43:49.656788   22792 system_pods.go:61] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:43:49.656791   22792 system_pods.go:61] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:43:49.656795   22792 system_pods.go:61] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:43:49.656798   22792 system_pods.go:61] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:43:49.656801   22792 system_pods.go:61] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:43:49.656804   22792 system_pods.go:61] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:43:49.656808   22792 system_pods.go:61] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:43:49.656811   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:43:49.656816   22792 system_pods.go:61] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:43:49.656819   22792 system_pods.go:61] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:43:49.656823   22792 system_pods.go:61] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:43:49.656826   22792 system_pods.go:61] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:43:49.656831   22792 system_pods.go:61] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:43:49.656834   22792 system_pods.go:61] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:43:49.656837   22792 system_pods.go:61] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:43:49.656842   22792 system_pods.go:74] duration metric: took 181.289408ms to wait for pod list to return data ...
	I0913 18:43:49.656852   22792 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:43:49.847258   22792 request.go:632] Waited for 190.329384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:43:49.847325   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:43:49.847332   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:49.847353   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:49.847376   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:49.860502   22792 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0913 18:43:49.860781   22792 default_sa.go:45] found service account: "default"
	I0913 18:43:49.860806   22792 default_sa.go:55] duration metric: took 203.946475ms for default service account to be created ...
	I0913 18:43:49.860818   22792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:43:50.047230   22792 request.go:632] Waited for 186.339317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:50.047293   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:43:50.047300   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:50.047311   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:50.047320   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:50.053175   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:43:50.057396   22792 system_pods.go:86] 17 kube-system pods found
	I0913 18:43:50.057418   22792 system_pods.go:89] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:43:50.057423   22792 system_pods.go:89] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:43:50.057427   22792 system_pods.go:89] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:43:50.057431   22792 system_pods.go:89] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:43:50.057435   22792 system_pods.go:89] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:43:50.057439   22792 system_pods.go:89] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:43:50.057442   22792 system_pods.go:89] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:43:50.057446   22792 system_pods.go:89] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:43:50.057450   22792 system_pods.go:89] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:43:50.057453   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:43:50.057457   22792 system_pods.go:89] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:43:50.057460   22792 system_pods.go:89] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:43:50.057463   22792 system_pods.go:89] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:43:50.057467   22792 system_pods.go:89] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:43:50.057472   22792 system_pods.go:89] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:43:50.057475   22792 system_pods.go:89] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:43:50.057480   22792 system_pods.go:89] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:43:50.057486   22792 system_pods.go:126] duration metric: took 196.658835ms to wait for k8s-apps to be running ...
	I0913 18:43:50.057501   22792 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:43:50.057549   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:43:50.073387   22792 system_svc.go:56] duration metric: took 15.885277ms WaitForService to wait for kubelet
	I0913 18:43:50.073415   22792 kubeadm.go:582] duration metric: took 22.594235765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
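The final component check above confirms that the kubelet systemd unit is active by running systemctl is-active --quiet and inspecting only the exit code. A small sketch of that check, assuming systemd and passwordless sudo on the target host (the real code runs it over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit code means the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running:", err)
		return
	}
	fmt.Println("kubelet is running")
}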
	I0913 18:43:50.073434   22792 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:43:50.247824   22792 request.go:632] Waited for 174.319724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes
	I0913 18:43:50.247892   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes
	I0913 18:43:50.247899   22792 round_trippers.go:469] Request Headers:
	I0913 18:43:50.247910   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:43:50.247914   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:43:50.251836   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:43:50.252517   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:43:50.252547   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:43:50.252570   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:43:50.252576   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:43:50.252585   22792 node_conditions.go:105] duration metric: took 179.145226ms to run NodePressure ...
	I0913 18:43:50.252600   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:43:50.252623   22792 start.go:255] writing updated cluster config ...
	I0913 18:43:50.254637   22792 out.go:201] 
	I0913 18:43:50.256021   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:43:50.256102   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:43:50.257560   22792 out.go:177] * Starting "ha-617764-m03" control-plane node in "ha-617764" cluster
	I0913 18:43:50.258691   22792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:43:50.258711   22792 cache.go:56] Caching tarball of preloaded images
	I0913 18:43:50.258841   22792 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:43:50.258854   22792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:43:50.258945   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:43:50.259133   22792 start.go:360] acquireMachinesLock for ha-617764-m03: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:43:50.259190   22792 start.go:364] duration metric: took 36.307µs to acquireMachinesLock for "ha-617764-m03"
	I0913 18:43:50.259213   22792 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:43:50.259350   22792 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0913 18:43:50.260708   22792 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 18:43:50.260798   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:43:50.260839   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:43:50.276521   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I0913 18:43:50.276883   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:43:50.277314   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:43:50.277333   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:43:50.277654   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:43:50.277825   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:43:50.277948   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:43:50.278139   22792 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 18:43:50.278171   22792 client.go:168] LocalClient.Create starting
	I0913 18:43:50.278210   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 18:43:50.278240   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:43:50.278253   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:43:50.278299   22792 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 18:43:50.278317   22792 main.go:141] libmachine: Decoding PEM data...
	I0913 18:43:50.278327   22792 main.go:141] libmachine: Parsing certificate...
	I0913 18:43:50.278341   22792 main.go:141] libmachine: Running pre-create checks...
	I0913 18:43:50.278348   22792 main.go:141] libmachine: (ha-617764-m03) Calling .PreCreateCheck
	I0913 18:43:50.278514   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:43:50.278875   22792 main.go:141] libmachine: Creating machine...
	I0913 18:43:50.278886   22792 main.go:141] libmachine: (ha-617764-m03) Calling .Create
	I0913 18:43:50.279010   22792 main.go:141] libmachine: (ha-617764-m03) Creating KVM machine...
	I0913 18:43:50.280249   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found existing default KVM network
	I0913 18:43:50.280409   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found existing private KVM network mk-ha-617764
	I0913 18:43:50.280562   22792 main.go:141] libmachine: (ha-617764-m03) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 ...
	I0913 18:43:50.280585   22792 main.go:141] libmachine: (ha-617764-m03) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:43:50.280698   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.280556   23564 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:43:50.280766   22792 main.go:141] libmachine: (ha-617764-m03) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:43:50.509770   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.509656   23564 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa...
	I0913 18:43:50.718355   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.718232   23564 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/ha-617764-m03.rawdisk...
	I0913 18:43:50.718383   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Writing magic tar header
	I0913 18:43:50.718394   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Writing SSH key tar header
	I0913 18:43:50.718401   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:50.718356   23564 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 ...
	I0913 18:43:50.718520   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03
	I0913 18:43:50.718542   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03 (perms=drwx------)
	I0913 18:43:50.718556   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 18:43:50.718574   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:43:50.718582   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 18:43:50.718589   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:43:50.718595   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:43:50.718604   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Checking permissions on dir: /home
	I0913 18:43:50.718611   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Skipping /home - not owner
	I0913 18:43:50.718635   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:43:50.718653   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 18:43:50.718671   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 18:43:50.718679   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:43:50.718689   22792 main.go:141] libmachine: (ha-617764-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:43:50.718694   22792 main.go:141] libmachine: (ha-617764-m03) Creating domain...
	I0913 18:43:50.719572   22792 main.go:141] libmachine: (ha-617764-m03) define libvirt domain using xml: 
	I0913 18:43:50.719592   22792 main.go:141] libmachine: (ha-617764-m03) <domain type='kvm'>
	I0913 18:43:50.719600   22792 main.go:141] libmachine: (ha-617764-m03)   <name>ha-617764-m03</name>
	I0913 18:43:50.719604   22792 main.go:141] libmachine: (ha-617764-m03)   <memory unit='MiB'>2200</memory>
	I0913 18:43:50.719618   22792 main.go:141] libmachine: (ha-617764-m03)   <vcpu>2</vcpu>
	I0913 18:43:50.719627   22792 main.go:141] libmachine: (ha-617764-m03)   <features>
	I0913 18:43:50.719639   22792 main.go:141] libmachine: (ha-617764-m03)     <acpi/>
	I0913 18:43:50.719647   22792 main.go:141] libmachine: (ha-617764-m03)     <apic/>
	I0913 18:43:50.719654   22792 main.go:141] libmachine: (ha-617764-m03)     <pae/>
	I0913 18:43:50.719663   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.719670   22792 main.go:141] libmachine: (ha-617764-m03)   </features>
	I0913 18:43:50.719678   22792 main.go:141] libmachine: (ha-617764-m03)   <cpu mode='host-passthrough'>
	I0913 18:43:50.719685   22792 main.go:141] libmachine: (ha-617764-m03)   
	I0913 18:43:50.719693   22792 main.go:141] libmachine: (ha-617764-m03)   </cpu>
	I0913 18:43:50.719700   22792 main.go:141] libmachine: (ha-617764-m03)   <os>
	I0913 18:43:50.719709   22792 main.go:141] libmachine: (ha-617764-m03)     <type>hvm</type>
	I0913 18:43:50.719719   22792 main.go:141] libmachine: (ha-617764-m03)     <boot dev='cdrom'/>
	I0913 18:43:50.719728   22792 main.go:141] libmachine: (ha-617764-m03)     <boot dev='hd'/>
	I0913 18:43:50.719746   22792 main.go:141] libmachine: (ha-617764-m03)     <bootmenu enable='no'/>
	I0913 18:43:50.719754   22792 main.go:141] libmachine: (ha-617764-m03)   </os>
	I0913 18:43:50.719764   22792 main.go:141] libmachine: (ha-617764-m03)   <devices>
	I0913 18:43:50.719773   22792 main.go:141] libmachine: (ha-617764-m03)     <disk type='file' device='cdrom'>
	I0913 18:43:50.719785   22792 main.go:141] libmachine: (ha-617764-m03)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/boot2docker.iso'/>
	I0913 18:43:50.719794   22792 main.go:141] libmachine: (ha-617764-m03)       <target dev='hdc' bus='scsi'/>
	I0913 18:43:50.719802   22792 main.go:141] libmachine: (ha-617764-m03)       <readonly/>
	I0913 18:43:50.719813   22792 main.go:141] libmachine: (ha-617764-m03)     </disk>
	I0913 18:43:50.719821   22792 main.go:141] libmachine: (ha-617764-m03)     <disk type='file' device='disk'>
	I0913 18:43:50.719832   22792 main.go:141] libmachine: (ha-617764-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:43:50.719849   22792 main.go:141] libmachine: (ha-617764-m03)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/ha-617764-m03.rawdisk'/>
	I0913 18:43:50.719860   22792 main.go:141] libmachine: (ha-617764-m03)       <target dev='hda' bus='virtio'/>
	I0913 18:43:50.719871   22792 main.go:141] libmachine: (ha-617764-m03)     </disk>
	I0913 18:43:50.719881   22792 main.go:141] libmachine: (ha-617764-m03)     <interface type='network'>
	I0913 18:43:50.719888   22792 main.go:141] libmachine: (ha-617764-m03)       <source network='mk-ha-617764'/>
	I0913 18:43:50.719902   22792 main.go:141] libmachine: (ha-617764-m03)       <model type='virtio'/>
	I0913 18:43:50.719913   22792 main.go:141] libmachine: (ha-617764-m03)     </interface>
	I0913 18:43:50.719921   22792 main.go:141] libmachine: (ha-617764-m03)     <interface type='network'>
	I0913 18:43:50.719932   22792 main.go:141] libmachine: (ha-617764-m03)       <source network='default'/>
	I0913 18:43:50.719944   22792 main.go:141] libmachine: (ha-617764-m03)       <model type='virtio'/>
	I0913 18:43:50.719952   22792 main.go:141] libmachine: (ha-617764-m03)     </interface>
	I0913 18:43:50.719961   22792 main.go:141] libmachine: (ha-617764-m03)     <serial type='pty'>
	I0913 18:43:50.719971   22792 main.go:141] libmachine: (ha-617764-m03)       <target port='0'/>
	I0913 18:43:50.719984   22792 main.go:141] libmachine: (ha-617764-m03)     </serial>
	I0913 18:43:50.720013   22792 main.go:141] libmachine: (ha-617764-m03)     <console type='pty'>
	I0913 18:43:50.720036   22792 main.go:141] libmachine: (ha-617764-m03)       <target type='serial' port='0'/>
	I0913 18:43:50.720053   22792 main.go:141] libmachine: (ha-617764-m03)     </console>
	I0913 18:43:50.720064   22792 main.go:141] libmachine: (ha-617764-m03)     <rng model='virtio'>
	I0913 18:43:50.720076   22792 main.go:141] libmachine: (ha-617764-m03)       <backend model='random'>/dev/random</backend>
	I0913 18:43:50.720085   22792 main.go:141] libmachine: (ha-617764-m03)     </rng>
	I0913 18:43:50.720093   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.720103   22792 main.go:141] libmachine: (ha-617764-m03)     
	I0913 18:43:50.720112   22792 main.go:141] libmachine: (ha-617764-m03)   </devices>
	I0913 18:43:50.720121   22792 main.go:141] libmachine: (ha-617764-m03) </domain>
	I0913 18:43:50.720133   22792 main.go:141] libmachine: (ha-617764-m03) 
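
The block above is the full libvirt domain XML the kvm2 driver generates for the new VM: 2200 MiB of memory, 2 vCPUs, host-passthrough CPU, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-617764 network, one on libvirt's default network). Defining and booting such a domain comes down to two libvirt calls; a minimal sketch using the github.com/libvirt/libvirt-go bindings, where the bindings, package name, and function are assumptions for illustration rather than the driver's actual code:

// kvmsketch: define a persistent libvirt domain from XML and boot it,
// roughly what "Creating domain..." in the log corresponds to.
package kvmsketch

import (
	libvirt "github.com/libvirt/libvirt-go" // assumed bindings, not the driver's own plugin
)

func defineAndStart(domainXML string) error {
	// Same URI as KVMQemuURI in the machine config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Persist the domain definition from the generated XML.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Start (boot) the defined domain.
	return dom.Create()
}
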
	I0913 18:43:50.727105   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:83:c8:09 in network default
	I0913 18:43:50.727653   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring networks are active...
	I0913 18:43:50.727670   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:50.728499   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring network default is active
	I0913 18:43:50.728841   22792 main.go:141] libmachine: (ha-617764-m03) Ensuring network mk-ha-617764 is active
	I0913 18:43:50.729292   22792 main.go:141] libmachine: (ha-617764-m03) Getting domain xml...
	I0913 18:43:50.729984   22792 main.go:141] libmachine: (ha-617764-m03) Creating domain...
	I0913 18:43:51.960516   22792 main.go:141] libmachine: (ha-617764-m03) Waiting to get IP...
	I0913 18:43:51.961283   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:51.961628   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:51.961674   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:51.961619   23564 retry.go:31] will retry after 222.94822ms: waiting for machine to come up
	I0913 18:43:52.185989   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.186489   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.186519   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.186468   23564 retry.go:31] will retry after 348.512697ms: waiting for machine to come up
	I0913 18:43:52.536967   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.537348   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.537378   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.537294   23564 retry.go:31] will retry after 356.439128ms: waiting for machine to come up
	I0913 18:43:52.895652   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:52.896099   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:52.896129   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:52.896049   23564 retry.go:31] will retry after 531.086298ms: waiting for machine to come up
	I0913 18:43:53.428881   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:53.429320   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:53.429348   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:53.429273   23564 retry.go:31] will retry after 545.757086ms: waiting for machine to come up
	I0913 18:43:53.977006   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:53.977444   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:53.977469   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:53.977389   23564 retry.go:31] will retry after 899.801689ms: waiting for machine to come up
	I0913 18:43:54.878395   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:54.878846   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:54.878874   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:54.878805   23564 retry.go:31] will retry after 936.88095ms: waiting for machine to come up
	I0913 18:43:55.817262   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:55.817647   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:55.817673   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:55.817605   23564 retry.go:31] will retry after 1.411862736s: waiting for machine to come up
	I0913 18:43:57.231474   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:57.232007   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:57.232035   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:57.231965   23564 retry.go:31] will retry after 1.158592591s: waiting for machine to come up
	I0913 18:43:58.392379   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:43:58.392788   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:43:58.392803   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:43:58.392764   23564 retry.go:31] will retry after 1.974547795s: waiting for machine to come up
	I0913 18:44:00.369279   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:00.369865   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:00.369894   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:00.369815   23564 retry.go:31] will retry after 2.798968918s: waiting for machine to come up
	I0913 18:44:03.171087   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:03.171475   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:03.171512   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:03.171449   23564 retry.go:31] will retry after 2.54793054s: waiting for machine to come up
	I0913 18:44:05.721058   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:05.721564   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:05.721585   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:05.721527   23564 retry.go:31] will retry after 3.45685189s: waiting for machine to come up
	I0913 18:44:09.179717   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:09.180158   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find current IP address of domain ha-617764-m03 in network mk-ha-617764
	I0913 18:44:09.180185   22792 main.go:141] libmachine: (ha-617764-m03) DBG | I0913 18:44:09.180093   23564 retry.go:31] will retry after 4.407544734s: waiting for machine to come up
	I0913 18:44:13.591186   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.591703   22792 main.go:141] libmachine: (ha-617764-m03) Found IP for machine: 192.168.39.118
	I0913 18:44:13.591736   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has current primary IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.591745   22792 main.go:141] libmachine: (ha-617764-m03) Reserving static IP address...
	I0913 18:44:13.592220   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find host DHCP lease matching {name: "ha-617764-m03", mac: "52:54:00:4c:bc:fa", ip: "192.168.39.118"} in network mk-ha-617764
	I0913 18:44:13.663972   22792 main.go:141] libmachine: (ha-617764-m03) Reserved static IP address: 192.168.39.118
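
The repeated "will retry after ..." lines above are the driver polling libvirt's DHCP leases for the VM's MAC address with a growing delay until an address appears (here, 192.168.39.118 after roughly 23 seconds). A generic sketch of that wait-with-backoff pattern; lookupIP is a hypothetical helper and the timings are illustrative, not the driver's exact schedule:

// ipwait: poll for the machine's IP with a growing backoff, the pattern behind
// the "waiting for machine to come up" retries in the log above.
package ipwait

import (
	"fmt"
	"time"
)

// waitForIP calls lookupIP until it returns a non-empty address or the timeout
// elapses, sleeping with a growing delay between attempts.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2 // grow ~1.5x per attempt, capped at 5s
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
}
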
	I0913 18:44:13.664003   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Getting to WaitForSSH function...
	I0913 18:44:13.664010   22792 main.go:141] libmachine: (ha-617764-m03) Waiting for SSH to be available...
	I0913 18:44:13.666336   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:13.666646   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764
	I0913 18:44:13.666682   22792 main.go:141] libmachine: (ha-617764-m03) DBG | unable to find defined IP address of network mk-ha-617764 interface with MAC address 52:54:00:4c:bc:fa
	I0913 18:44:13.666775   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH client type: external
	I0913 18:44:13.666797   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa (-rw-------)
	I0913 18:44:13.666862   22792 main.go:141] libmachine: (ha-617764-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:44:13.666896   22792 main.go:141] libmachine: (ha-617764-m03) DBG | About to run SSH command:
	I0913 18:44:13.666915   22792 main.go:141] libmachine: (ha-617764-m03) DBG | exit 0
	I0913 18:44:13.670667   22792 main.go:141] libmachine: (ha-617764-m03) DBG | SSH cmd err, output: exit status 255: 
	I0913 18:44:13.670691   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 18:44:13.670701   22792 main.go:141] libmachine: (ha-617764-m03) DBG | command : exit 0
	I0913 18:44:13.670712   22792 main.go:141] libmachine: (ha-617764-m03) DBG | err     : exit status 255
	I0913 18:44:13.670722   22792 main.go:141] libmachine: (ha-617764-m03) DBG | output  : 
	I0913 18:44:16.671501   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Getting to WaitForSSH function...
	I0913 18:44:16.674272   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.674700   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.674728   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.674886   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH client type: external
	I0913 18:44:16.674901   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa (-rw-------)
	I0913 18:44:16.674917   22792 main.go:141] libmachine: (ha-617764-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:44:16.674926   22792 main.go:141] libmachine: (ha-617764-m03) DBG | About to run SSH command:
	I0913 18:44:16.674937   22792 main.go:141] libmachine: (ha-617764-m03) DBG | exit 0
	I0913 18:44:16.802087   22792 main.go:141] libmachine: (ha-617764-m03) DBG | SSH cmd err, output: <nil>: 
	I0913 18:44:16.802352   22792 main.go:141] libmachine: (ha-617764-m03) KVM machine creation complete!
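
WaitForSSH above shells out to the system ssh binary and treats a clean `exit 0` as readiness; the first probe fails with exit status 255 because the guest's sshd is not up yet, and the retry a few seconds later succeeds. A rough equivalent using os/exec; the package name, option list, and retry schedule are placeholders rather than the driver's exact invocation:

// sshprobe: probe a guest for SSH readiness by running `exit 0` remotely,
// mirroring the WaitForSSH loop in the log above.
package sshprobe

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... <user>@<host> exit 0` until it succeeds or the
// attempts run out. A zero exit status means sshd is up and the key works.
func waitForSSH(user, host, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // roughly the gap between probes in the log
	}
	return fmt.Errorf("ssh to %s@%s never became available", user, host)
}
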
	I0913 18:44:16.802725   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:44:16.803249   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:16.803483   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:16.803650   22792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:44:16.803666   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:44:16.804794   22792 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:44:16.804809   22792 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:44:16.804822   22792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:44:16.804833   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:16.807097   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.807435   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.807460   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.807595   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:16.807770   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.807894   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.808004   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:16.808115   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:16.808373   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:16.808390   22792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:44:16.917430   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:44:16.917450   22792 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:44:16.917457   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:16.920222   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.920568   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:16.920593   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:16.920710   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:16.920899   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.921041   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:16.921197   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:16.921389   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:16.921627   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:16.921647   22792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:44:17.035046   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:44:17.035107   22792 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:44:17.035116   22792 main.go:141] libmachine: Provisioning with buildroot...
	I0913 18:44:17.035126   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.035348   22792 buildroot.go:166] provisioning hostname "ha-617764-m03"
	I0913 18:44:17.035373   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.035514   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.037946   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.038320   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.038346   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.038484   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.038678   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.038833   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.038940   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.039090   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:17.039237   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:17.039248   22792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764-m03 && echo "ha-617764-m03" | sudo tee /etc/hostname
	I0913 18:44:17.162627   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764-m03
	
	I0913 18:44:17.162684   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.165667   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.166190   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.166221   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.166426   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.166745   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.166994   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.167180   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.167381   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:17.167575   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:17.167602   22792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:44:17.289053   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:44:17.289089   22792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:44:17.289114   22792 buildroot.go:174] setting up certificates
	I0913 18:44:17.289126   22792 provision.go:84] configureAuth start
	I0913 18:44:17.289138   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetMachineName
	I0913 18:44:17.289455   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:17.292727   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.293193   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.293219   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.293507   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.296104   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.296401   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.296436   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.296508   22792 provision.go:143] copyHostCerts
	I0913 18:44:17.296548   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:44:17.296589   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:44:17.296601   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:44:17.296679   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:44:17.296782   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:44:17.296810   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:44:17.296819   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:44:17.296874   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:44:17.296935   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:44:17.296958   22792 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:44:17.296967   22792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:44:17.296998   22792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:44:17.297108   22792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764-m03 san=[127.0.0.1 192.168.39.118 ha-617764-m03 localhost minikube]
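
configureAuth above refreshes the host-side CA and client certs and then issues a per-machine server certificate whose SANs cover the loopback address, the VM's IP, its hostname, localhost, and minikube. A condensed crypto/x509 sketch of issuing such a cert from an existing CA; the package and function names are hypothetical, and the SAN values are copied from the log line above:

// certsketch: issue a server certificate signed by a given CA, with the SANs
// shown in the provision.go:117 line above.
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert returns the DER-encoded server cert and its private key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. jenkins.ha-617764-m03
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.118")},
		DNSNames:     []string{"ha-617764-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
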
	I0913 18:44:17.994603   22792 provision.go:177] copyRemoteCerts
	I0913 18:44:17.994665   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:44:17.994687   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:17.997165   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.997477   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:17.997501   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:17.997667   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:17.997867   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:17.998004   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:17.998164   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.085053   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:44:18.085147   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:44:18.113227   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:44:18.113322   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:44:18.139984   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:44:18.140045   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:44:18.163918   22792 provision.go:87] duration metric: took 874.778214ms to configureAuth
	I0913 18:44:18.163947   22792 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:44:18.164223   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:18.164325   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.166705   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.167021   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.167051   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.167203   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.167392   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.167550   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.167683   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.167830   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:18.167978   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:18.167991   22792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:44:18.407262   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:44:18.407290   22792 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:44:18.407298   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetURL
	I0913 18:44:18.408775   22792 main.go:141] libmachine: (ha-617764-m03) DBG | Using libvirt version 6000000
	I0913 18:44:18.411073   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.411441   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.411469   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.411627   22792 main.go:141] libmachine: Docker is up and running!
	I0913 18:44:18.411642   22792 main.go:141] libmachine: Reticulating splines...
	I0913 18:44:18.411649   22792 client.go:171] duration metric: took 28.133468342s to LocalClient.Create
	I0913 18:44:18.411675   22792 start.go:167] duration metric: took 28.133537197s to libmachine.API.Create "ha-617764"
	I0913 18:44:18.411687   22792 start.go:293] postStartSetup for "ha-617764-m03" (driver="kvm2")
	I0913 18:44:18.411701   22792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:44:18.411723   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.411923   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:44:18.411947   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.413754   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.414041   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.414067   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.414188   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.414367   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.414521   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.414649   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.500086   22792 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:44:18.504465   22792 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:44:18.504492   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:44:18.504570   22792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:44:18.504640   22792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:44:18.504648   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:44:18.504724   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:44:18.513533   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:44:18.538121   22792 start.go:296] duration metric: took 126.41811ms for postStartSetup
	I0913 18:44:18.538175   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetConfigRaw
	I0913 18:44:18.538744   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:18.541022   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.541373   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.541402   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.541667   22792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:44:18.541859   22792 start.go:128] duration metric: took 28.282497305s to createHost
	I0913 18:44:18.541881   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.543900   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.544232   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.544274   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.544436   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.544575   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.544729   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.544825   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.544940   22792 main.go:141] libmachine: Using SSH client type: native
	I0913 18:44:18.545159   22792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0913 18:44:18.545174   22792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:44:18.654826   22792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726253058.635136982
	
	I0913 18:44:18.654846   22792 fix.go:216] guest clock: 1726253058.635136982
	I0913 18:44:18.654855   22792 fix.go:229] Guest: 2024-09-13 18:44:18.635136982 +0000 UTC Remote: 2024-09-13 18:44:18.541870412 +0000 UTC m=+152.232780684 (delta=93.26657ms)
	I0913 18:44:18.654874   22792 fix.go:200] guest clock delta is within tolerance: 93.26657ms
	I0913 18:44:18.654880   22792 start.go:83] releasing machines lock for "ha-617764-m03", held for 28.395679518s
	I0913 18:44:18.654905   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.655148   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:18.657542   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.657923   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.657954   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.660294   22792 out.go:177] * Found network options:
	I0913 18:44:18.661658   22792 out.go:177]   - NO_PROXY=192.168.39.145,192.168.39.203
	W0913 18:44:18.662833   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 18:44:18.662855   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:44:18.662867   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663354   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663520   22792 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:44:18.663595   22792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:44:18.663630   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	W0913 18:44:18.663661   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 18:44:18.663686   22792 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 18:44:18.663750   22792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:44:18.663773   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:44:18.666489   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.666717   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.666864   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.666891   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.667045   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:18.667063   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:18.667090   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.667280   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.667318   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:44:18.667454   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:44:18.667457   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.667656   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:44:18.667669   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.667774   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:44:18.904393   22792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:44:18.910388   22792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:44:18.910459   22792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:44:18.926370   22792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
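
The find/mv above renames bridge and podman CNI configs to *.mk_disabled so they stop shadowing the CNI minikube wants. A rough local Go equivalent of that rename pass; it walks a directory on the local filesystem, whereas minikube runs the command over SSH on the guest, and the function name is made up:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs in dir to *.mk_disabled.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d") // requires root on a real node
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled:", disabled)
}
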
	I0913 18:44:18.926401   22792 start.go:495] detecting cgroup driver to use...
	I0913 18:44:18.926455   22792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:44:18.942741   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:44:18.956665   22792 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:44:18.956716   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:44:18.970209   22792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:44:18.984000   22792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:44:19.105582   22792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:44:19.253613   22792 docker.go:233] disabling docker service ...
	I0913 18:44:19.253679   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:44:19.269462   22792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:44:19.282397   22792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:44:19.421118   22792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:44:19.552164   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:44:19.566377   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:44:19.585430   22792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:44:19.585485   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.596399   22792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:44:19.596450   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.607523   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.618292   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.629162   22792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:44:19.640258   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.651512   22792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:44:19.669361   22792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
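
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image and force the cgroupfs cgroup manager (plus the conmon/sysctl tweaks). A small in-memory sketch of the two main substitutions using Go regexps; the keys and values come from the log, the helper name is invented:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same pause_image and cgroup_manager
// substitutions the log performs with sed. Sketch only.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
}
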
	I0913 18:44:19.682032   22792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:44:19.693153   22792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 18:44:19.693220   22792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 18:44:19.708001   22792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
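
The sequence above shows the netfilter fallback: the bridge sysctl is missing, so the br_netfilter module is loaded and IPv4 forwarding is switched on before cri-o is restarted. A local sketch of that fallback, assuming root on the machine itself rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter loads br_netfilter if the bridge-nf sysctl is absent, then
// enables IPv4 forwarding. Sketch of the steps in the log; requires root.
func ensureNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}
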
	I0913 18:44:19.719219   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:19.842723   22792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:44:19.941502   22792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:44:19.941573   22792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:44:19.946517   22792 start.go:563] Will wait 60s for crictl version
	I0913 18:44:19.946584   22792 ssh_runner.go:195] Run: which crictl
	I0913 18:44:19.951033   22792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:44:19.994419   22792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:44:19.994508   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:44:20.026203   22792 ssh_runner.go:195] Run: crio --version
	I0913 18:44:20.057969   22792 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:44:20.059353   22792 out.go:177]   - env NO_PROXY=192.168.39.145
	I0913 18:44:20.060544   22792 out.go:177]   - env NO_PROXY=192.168.39.145,192.168.39.203
	I0913 18:44:20.061885   22792 main.go:141] libmachine: (ha-617764-m03) Calling .GetIP
	I0913 18:44:20.064491   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:20.064889   22792 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:44:20.064910   22792 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:44:20.065147   22792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:44:20.069234   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:44:20.085265   22792 mustload.go:65] Loading cluster: ha-617764
	I0913 18:44:20.085536   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:20.085832   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:20.085873   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:20.100678   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0913 18:44:20.101132   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:20.101632   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:20.101652   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:20.101952   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:20.102112   22792 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:44:20.103679   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:44:20.104082   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:20.104127   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:20.118274   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0913 18:44:20.118755   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:20.119183   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:20.119202   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:20.119526   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:20.119672   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:44:20.119844   22792 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.118
	I0913 18:44:20.119854   22792 certs.go:194] generating shared ca certs ...
	I0913 18:44:20.119866   22792 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.119979   22792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:44:20.120016   22792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:44:20.120025   22792 certs.go:256] generating profile certs ...
	I0913 18:44:20.120095   22792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:44:20.120118   22792 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f
	I0913 18:44:20.120131   22792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.118 192.168.39.254]
	I0913 18:44:20.197533   22792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f ...
	I0913 18:44:20.197562   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f: {Name:mk56f9dfde1b148b5c4a8abc62ca190d87a808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.197747   22792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f ...
	I0913 18:44:20.197761   22792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f: {Name:mk8928cafe5417a6fe2ae9196048e3f96fa72023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:44:20.197855   22792 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.43e3f53f -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:44:20.198000   22792 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.43e3f53f -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 18:44:20.198186   22792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
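
The crypto.go lines above regenerate the apiserver serving certificate because the new control-plane node's IP (192.168.39.118) must appear in the SAN list alongside the service IP, localhost, the other control-plane nodes, and the kube-vip address (192.168.39.254). A self-contained sketch of building such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the cluster's minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.145"), net.ParseIP("192.168.39.203"),
			net.ParseIP("192.168.39.118"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
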
	I0913 18:44:20.198201   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:44:20.198217   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:44:20.198231   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:44:20.198250   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:44:20.198269   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:44:20.198286   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:44:20.198302   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:44:20.226232   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:44:20.226325   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:44:20.226376   22792 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:44:20.226390   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:44:20.226444   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:44:20.226479   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:44:20.226507   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:44:20.226573   22792 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:44:20.226609   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.226629   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.226647   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.226684   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:44:20.229767   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:20.230182   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:44:20.230200   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:20.230414   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:44:20.230602   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:44:20.230742   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:44:20.230837   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:44:20.302398   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 18:44:20.307468   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 18:44:20.320292   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 18:44:20.324955   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 18:44:20.337983   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 18:44:20.344488   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 18:44:20.356113   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 18:44:20.360329   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 18:44:20.371659   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 18:44:20.376502   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 18:44:20.387569   22792 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 18:44:20.391714   22792 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 18:44:20.408717   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:44:20.435090   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:44:20.460942   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:44:20.485491   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:44:20.508611   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0913 18:44:20.532845   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:44:20.555757   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:44:20.578859   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:44:20.602953   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:44:20.628234   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:44:20.653837   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:44:20.678692   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 18:44:20.695969   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 18:44:20.713357   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 18:44:20.730533   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 18:44:20.747290   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 18:44:20.763797   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 18:44:20.780741   22792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 18:44:20.797290   22792 ssh_runner.go:195] Run: openssl version
	I0913 18:44:20.803524   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:44:20.814404   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.819001   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.819051   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:44:20.824835   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:44:20.836589   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:44:20.847760   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.852138   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.852182   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:44:20.857733   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:44:20.868683   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:44:20.880517   22792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.884835   22792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.884879   22792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:44:20.890420   22792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
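
Each CA file above is installed by copying it into /usr/share/ca-certificates and then symlinking it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients discover trusted CAs. A sketch of that pattern that shells out to openssl (assumed to be on PATH), since the legacy subject hash is easiest to obtain that way:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash creates <sslCertsDir>/<subject-hash>.0 -> certPath, the same
// layout the log builds with `openssl x509 -hash` and `ln -fs`. Requires root
// and the openssl binary; illustrative only.
func linkCertByHash(certPath, sslCertsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(sslCertsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -f: replace an existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created", link)
}
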
	I0913 18:44:20.902701   22792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:44:20.906972   22792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:44:20.907018   22792 kubeadm.go:934] updating node {m03 192.168.39.118 8443 v1.31.1 crio true true} ...
	I0913 18:44:20.907126   22792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
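
The kubelet drop-in above carries per-node flags: the hostname override and node IP differ for every machine in the cluster. A small text/template sketch of rendering that ExecStart line; the template text paraphrases the unit shown above and is not minikube's exact template:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "ha-617764-m03", "192.168.39.118"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
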
	I0913 18:44:20.907158   22792 kube-vip.go:115] generating kube-vip config ...
	I0913 18:44:20.907199   22792 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:44:20.923403   22792 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:44:20.923474   22792 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
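
The manifest above runs kube-vip as a static pod so that 192.168.39.254:8443 floats between control-plane nodes and serves as the shared apiserver endpoint. A quick sketch of probing that endpoint from the host; it skips TLS verification because the probe does not load minikubeCA, and the VIP and port are taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikubeCA, which this sketch
			// does not load, so verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP status (even 401/403 for an anonymous probe) shows the VIP is up.
	fmt.Println("VIP answered with HTTP", resp.StatusCode)
}
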
	I0913 18:44:20.923532   22792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:44:20.933709   22792 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 18:44:20.933772   22792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 18:44:20.943277   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 18:44:20.943297   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 18:44:20.943314   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:44:20.943356   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:44:20.943303   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:44:20.943278   22792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 18:44:20.943428   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:44:20.943455   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:20.958921   22792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:44:20.958948   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 18:44:20.958986   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 18:44:20.959011   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 18:44:20.959019   22792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:44:20.959050   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 18:44:20.983538   22792 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 18:44:20.983581   22792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
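
Because the node has no Kubernetes binaries yet, kubeadm, kubectl and kubelet are transferred from the local cache; the log also shows the dl.k8s.io checksum URLs used when a binary is not cached. A sketch of downloading one binary and verifying it against its published .sha256 file; minikube additionally caches under .minikube/cache, which this sketch omits:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified fetches url into dest and checks it against the sha256
// digest published at url+".sha256".
func downloadVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	if err := downloadVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("kubectl downloaded and verified")
}
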
	I0913 18:44:21.866684   22792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 18:44:21.877058   22792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 18:44:21.896399   22792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:44:21.913772   22792 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:44:21.931619   22792 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:44:21.936255   22792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
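
The one-liner above updates /etc/hosts idempotently: it drops any existing control-plane.minikube.internal entry, appends the current one, and copies the result back. A local Go sketch of the same pattern, parameterised on the path so it can be tried against a scratch file instead of the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/tmp/hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
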
	I0913 18:44:21.949711   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:22.077379   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:44:22.095404   22792 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:44:22.095709   22792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:44:22.095743   22792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:22.112680   22792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0913 18:44:22.113186   22792 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:22.113686   22792 main.go:141] libmachine: Using API Version  1
	I0913 18:44:22.113705   22792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:22.114081   22792 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:22.114441   22792 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:44:22.114602   22792 start.go:317] joinCluster: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:44:22.114755   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 18:44:22.114776   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:44:22.117737   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:22.118269   22792 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:44:22.118298   22792 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:44:22.118403   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:44:22.118574   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:44:22.118738   22792 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:44:22.118864   22792 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:44:22.290532   22792 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:44:22.290589   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token szldyi.jx7bkapu8c26p2ux --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m03 --control-plane --apiserver-advertise-address=192.168.39.118 --apiserver-bind-port=8443"
	I0913 18:44:46.125346   22792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token szldyi.jx7bkapu8c26p2ux --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m03 --control-plane --apiserver-advertise-address=192.168.39.118 --apiserver-bind-port=8443": (23.834727038s)
	I0913 18:44:46.125383   22792 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 18:44:46.675572   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764-m03 minikube.k8s.io/updated_at=2024_09_13T18_44_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=false
	I0913 18:44:46.828529   22792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-617764-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 18:44:46.940550   22792 start.go:319] duration metric: took 24.825943975s to joinCluster
	I0913 18:44:46.940677   22792 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 18:44:46.941034   22792 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:44:46.942345   22792 out.go:177] * Verifying Kubernetes components...
	I0913 18:44:46.943542   22792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:44:47.214458   22792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:44:47.257262   22792 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:44:47.257469   22792 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 18:44:47.257525   22792 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.145:8443
	I0913 18:44:47.257700   22792 node_ready.go:35] waiting up to 6m0s for node "ha-617764-m03" to be "Ready" ...
	I0913 18:44:47.257767   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:47.257775   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:47.257782   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:47.257789   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:47.261015   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
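
node_ready.go keeps issuing the GET shown above every 500ms (up to 6m0s) until the node's Ready condition turns True. A client-go sketch of the same wait loop, assuming the kubeconfig path from the log; client-go is an extra dependency, so this is illustrative rather than minikube's own round-tripper based code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the Node object until its Ready condition is True.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "ha-617764-m03"); err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("node ha-617764-m03 is Ready")
}
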
	I0913 18:44:47.758260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:47.758281   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:47.758290   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:47.758294   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:47.761632   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:48.258586   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:48.258620   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:48.258639   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:48.258645   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:48.262554   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:48.758911   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:48.758936   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:48.758947   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:48.758952   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:48.763981   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:44:49.258403   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:49.258424   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:49.258432   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:49.258436   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:49.261673   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:49.262242   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:49.758261   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:49.758284   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:49.758296   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:49.758308   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:49.761487   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:50.258217   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:50.258240   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:50.258250   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:50.258254   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:50.261917   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:50.758653   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:50.758679   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:50.758691   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:50.758697   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:50.761871   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.257891   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:51.257932   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:51.257941   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:51.257945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:51.261395   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.757959   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:51.757987   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:51.758000   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:51.758005   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:51.761401   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:51.762347   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:52.257922   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:52.257944   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:52.257952   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:52.257957   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:52.262582   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:44:52.757893   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:52.757919   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:52.757928   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:52.757933   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:52.761982   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:44:53.258147   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:53.258170   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:53.258183   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:53.258188   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:53.261248   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:53.758906   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:53.758929   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:53.758938   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:53.758945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:53.762479   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:53.763110   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:54.258911   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:54.258932   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:54.258940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:54.258943   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:54.262344   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:54.758801   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:54.758823   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:54.758831   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:54.758835   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:54.762012   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:55.258836   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:55.258860   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:55.258872   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:55.258878   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:55.262275   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:55.757958   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:55.757997   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:55.758008   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:55.758013   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:55.761419   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:56.258260   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:56.258287   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:56.258297   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:56.258304   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:56.261753   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:56.262571   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:56.758786   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:56.758809   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:56.758818   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:56.758821   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:56.762274   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:57.258650   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:57.258677   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:57.258688   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:57.258693   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:57.262219   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:57.758298   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:57.758319   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:57.758329   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:57.758334   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:57.761704   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:58.258395   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:58.258421   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:58.258429   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:58.258434   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:58.262263   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:58.262860   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:44:58.758293   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:58.758320   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:58.758333   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:58.758340   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:58.761869   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:59.258216   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:59.258240   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:59.258248   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:59.258252   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:59.261660   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:44:59.758798   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:44:59.758824   22792 round_trippers.go:469] Request Headers:
	I0913 18:44:59.758833   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:44:59.758837   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:44:59.762196   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.257949   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:00.257969   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:00.257977   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:00.257980   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:00.261779   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.758236   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:00.758257   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:00.758266   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:00.758270   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:00.761640   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:00.762348   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:01.258661   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:01.258684   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:01.258692   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:01.258695   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:01.262043   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:01.758524   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:01.758549   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:01.758559   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:01.758566   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:01.762147   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.258789   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:02.258816   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:02.258827   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:02.258832   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:02.262512   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.757854   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:02.757879   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:02.757889   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:02.757894   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:02.761694   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:02.762546   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:03.257869   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:03.257891   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:03.257902   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:03.257905   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:03.261551   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:03.758746   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:03.758769   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:03.758777   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:03.758781   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:03.762559   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.257962   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:04.257985   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:04.257993   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:04.257997   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:04.261414   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.758251   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:04.758274   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:04.758282   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:04.758292   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:04.762024   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:04.762716   22792 node_ready.go:53] node "ha-617764-m03" has status "Ready":"False"
	I0913 18:45:05.258158   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:05.258180   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:05.258188   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:05.258192   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:05.261750   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:05.758157   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:05.758185   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:05.758191   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:05.758194   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:05.761652   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:06.258659   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:06.258681   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:06.258689   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:06.258693   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:06.262236   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:06.758069   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:06.758107   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:06.758117   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:06.758137   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:06.761583   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.257901   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.257929   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.257940   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.257945   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.261293   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.261948   22792 node_ready.go:49] node "ha-617764-m03" has status "Ready":"True"
	I0913 18:45:07.261964   22792 node_ready.go:38] duration metric: took 20.004251057s for node "ha-617764-m03" to be "Ready" ...
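
The loop traced above repeatedly GETs /api/v1/nodes/ha-617764-m03 until the node reports a Ready condition of "True" (about 20s in this run). As an illustration only, and not taken from minikube's own code, a minimal client-go sketch of the same wait could look like this (it assumes a kubeconfig at the default ~/.kube/config location):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll the node object until its Ready condition is True, mirroring the
// GET /api/v1/nodes/ha-617764-m03 loop in the log above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-617764-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
	}
	fmt.Println("timed out waiting for node to become Ready")
}
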
	I0913 18:45:07.261979   22792 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:45:07.262045   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:07.262054   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.262062   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.262070   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.269216   22792 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 18:45:07.278002   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.278075   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fdhnm
	I0913 18:45:07.278083   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.278089   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.278113   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.281227   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.281938   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.281956   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.281967   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.281979   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.284497   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.284957   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.284974   22792 pod_ready.go:82] duration metric: took 6.948175ms for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.284985   22792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.285047   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-htrbt
	I0913 18:45:07.285058   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.285070   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.285077   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.287707   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.288385   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.288398   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.288408   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.288416   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.291237   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:07.291898   22792 pod_ready.go:93] pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.291913   22792 pod_ready.go:82] duration metric: took 6.921874ms for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.291921   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.291976   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764
	I0913 18:45:07.291987   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.291997   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.292002   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.296919   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.297475   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:07.297487   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.297494   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.297498   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.303799   22792 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 18:45:07.304372   22792 pod_ready.go:93] pod "etcd-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.304400   22792 pod_ready.go:82] duration metric: took 12.472064ms for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.304413   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.304479   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m02
	I0913 18:45:07.304489   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.304500   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.304506   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.309120   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.309935   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:07.309954   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.309964   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.309970   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.314376   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:07.314767   22792 pod_ready.go:93] pod "etcd-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.314784   22792 pod_ready.go:82] duration metric: took 10.364044ms for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.314793   22792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.458166   22792 request.go:632] Waited for 143.309667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m03
	I0913 18:45:07.458240   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m03
	I0913 18:45:07.458262   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.458273   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.458280   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.461635   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.658619   22792 request.go:632] Waited for 196.368397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.658677   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:07.658682   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.658690   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.658699   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.661920   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:07.662467   22792 pod_ready.go:93] pod "etcd-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:07.662484   22792 pod_ready.go:82] duration metric: took 347.68543ms for pod "etcd-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
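
The "Waited ... due to client-side throttling" entries interleaved here come from client-go's client-side token-bucket rate limiter, which is derived from rest.Config.QPS and rest.Config.Burst (roughly 5 requests/s with a burst of 10 when left unset). As a hypothetical sketch for a custom client, not a change to minikube's behaviour, raising both removes those artificial waits:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// The delays logged above come from the default client-side limiter;
	// a higher QPS/Burst lets bursts of GETs go out without queueing.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
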
	I0913 18:45:07.662500   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:07.858673   22792 request.go:632] Waited for 196.108753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:45:07.858733   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 18:45:07.858738   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:07.858757   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:07.858764   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:07.861654   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:08.058763   22792 request.go:632] Waited for 196.379707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:08.058857   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:08.058869   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.058881   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.058890   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.062245   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.062930   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.062951   22792 pod_ready.go:82] duration metric: took 400.444861ms for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.062963   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.257911   22792 request.go:632] Waited for 194.878186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:45:08.257985   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 18:45:08.257992   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.258002   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.258011   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.261892   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.458402   22792 request.go:632] Waited for 195.746351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:08.458486   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:08.458497   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.458509   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.458520   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.462081   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.463183   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.463206   22792 pod_ready.go:82] duration metric: took 400.237121ms for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.463220   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.658691   22792 request.go:632] Waited for 195.384277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m03
	I0913 18:45:08.658743   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m03
	I0913 18:45:08.658749   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.658756   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.658760   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.662235   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.858689   22792 request.go:632] Waited for 195.371118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:08.858776   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:08.858789   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:08.858798   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:08.858807   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:08.862189   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:08.862736   22792 pod_ready.go:93] pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:08.862759   22792 pod_ready.go:82] duration metric: took 399.530638ms for pod "kube-apiserver-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:08.862772   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.058077   22792 request.go:632] Waited for 195.237895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:45:09.058174   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764
	I0913 18:45:09.058182   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.058195   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.058205   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.061599   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.258562   22792 request.go:632] Waited for 196.201704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:09.258636   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:09.258647   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.258657   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.258665   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.261933   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.262732   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:09.262754   22792 pod_ready.go:82] duration metric: took 399.972907ms for pod "kube-controller-manager-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.262768   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.458787   22792 request.go:632] Waited for 195.940964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:45:09.458839   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m02
	I0913 18:45:09.458844   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.458852   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.458857   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.462034   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.657980   22792 request.go:632] Waited for 195.27571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:09.658064   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:09.658074   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.658086   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.658113   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.661913   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:09.662725   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:09.662743   22792 pod_ready.go:82] duration metric: took 399.963324ms for pod "kube-controller-manager-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.662752   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:09.858912   22792 request.go:632] Waited for 196.078833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m03
	I0913 18:45:09.858972   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-617764-m03
	I0913 18:45:09.858979   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:09.858988   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:09.858995   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:09.862666   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.058882   22792 request.go:632] Waited for 195.333873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.058952   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.058960   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.058967   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.058971   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.062375   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.063280   22792 pod_ready.go:93] pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.063298   22792 pod_ready.go:82] duration metric: took 400.53806ms for pod "kube-controller-manager-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.063308   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bpk5" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.258308   22792 request.go:632] Waited for 194.921956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bpk5
	I0913 18:45:10.258366   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bpk5
	I0913 18:45:10.258372   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.258383   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.258393   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.261695   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.458778   22792 request.go:632] Waited for 196.165114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.458835   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:10.458842   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.458851   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.458856   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.462795   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.463269   22792 pod_ready.go:93] pod "kube-proxy-7bpk5" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.463285   22792 pod_ready.go:82] duration metric: took 399.971446ms for pod "kube-proxy-7bpk5" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.463295   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.658473   22792 request.go:632] Waited for 195.113067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:45:10.658534   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-92mml
	I0913 18:45:10.658540   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.658547   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.658552   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.662470   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:10.858668   22792 request.go:632] Waited for 195.3392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:10.858733   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:10.858740   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:10.858751   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:10.858759   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:10.861462   22792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 18:45:10.862049   22792 pod_ready.go:93] pod "kube-proxy-92mml" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:10.862071   22792 pod_ready.go:82] duration metric: took 398.769606ms for pod "kube-proxy-92mml" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:10.862082   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.058203   22792 request.go:632] Waited for 196.022069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:45:11.058265   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hqm8n
	I0913 18:45:11.058270   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.058277   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.058281   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.061914   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.258044   22792 request.go:632] Waited for 195.273377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:11.258117   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:11.258126   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.258138   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.258145   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.261745   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.262304   22792 pod_ready.go:93] pod "kube-proxy-hqm8n" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:11.262327   22792 pod_ready.go:82] duration metric: took 400.239534ms for pod "kube-proxy-hqm8n" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.262337   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.458444   22792 request.go:632] Waited for 196.01969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:45:11.458497   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764
	I0913 18:45:11.458504   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.458514   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.458521   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.461946   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.657948   22792 request.go:632] Waited for 195.28823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:11.658002   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 18:45:11.658007   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.658017   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.658023   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.661841   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:11.662470   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:11.662492   22792 pod_ready.go:82] duration metric: took 400.146385ms for pod "kube-scheduler-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.662506   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:11.858450   22792 request.go:632] Waited for 195.863677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:45:11.858507   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m02
	I0913 18:45:11.858512   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:11.858522   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:11.858526   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:11.861821   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.058902   22792 request.go:632] Waited for 196.361586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:12.058952   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 18:45:12.058957   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.058964   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.058968   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.062080   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.062688   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:12.062705   22792 pod_ready.go:82] duration metric: took 400.191873ms for pod "kube-scheduler-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.062717   22792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.258239   22792 request.go:632] Waited for 195.452487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m03
	I0913 18:45:12.258294   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-617764-m03
	I0913 18:45:12.258299   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.258306   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.258310   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.261850   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.458741   22792 request.go:632] Waited for 196.359842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:12.458799   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m03
	I0913 18:45:12.458804   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.458812   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.458819   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.461925   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.462443   22792 pod_ready.go:93] pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 18:45:12.462462   22792 pod_ready.go:82] duration metric: took 399.738229ms for pod "kube-scheduler-ha-617764-m03" in "kube-system" namespace to be "Ready" ...
	I0913 18:45:12.462476   22792 pod_ready.go:39] duration metric: took 5.200482826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:45:12.462493   22792 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:45:12.462545   22792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:45:12.479364   22792 api_server.go:72] duration metric: took 25.538641921s to wait for apiserver process to appear ...
	I0913 18:45:12.479384   22792 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:45:12.479408   22792 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0913 18:45:12.483655   22792 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0913 18:45:12.483722   22792 round_trippers.go:463] GET https://192.168.39.145:8443/version
	I0913 18:45:12.483732   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.483743   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.483752   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.484691   22792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 18:45:12.484751   22792 api_server.go:141] control plane version: v1.31.1
	I0913 18:45:12.484765   22792 api_server.go:131] duration metric: took 5.374766ms to wait for apiserver health ...
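
The health wait above probes the apiserver directly: GET /healthz must return the literal body "ok", and GET /version reports the control-plane version (v1.31.1 in this run). A small client-go sketch of the same two probes, offered only as an illustration and assuming the default kubeconfig:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// /healthz is served by the apiserver; a healthy control plane answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Corresponds to the GET /version call in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
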
	I0913 18:45:12.484771   22792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:45:12.658175   22792 request.go:632] Waited for 173.338358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:12.658263   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:12.658282   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.658293   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.658301   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.663873   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:45:12.670428   22792 system_pods.go:59] 24 kube-system pods found
	I0913 18:45:12.670456   22792 system_pods.go:61] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:45:12.670461   22792 system_pods.go:61] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:45:12.670466   22792 system_pods.go:61] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:45:12.670469   22792 system_pods.go:61] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:45:12.670473   22792 system_pods.go:61] "etcd-ha-617764-m03" [4247e8e8-fa8d-47f3-9ab3-1ec5c9d85de9] Running
	I0913 18:45:12.670476   22792 system_pods.go:61] "kindnet-8mbkd" [4fe1b67c-b4ca-4839-bbc9-2bfeddf91611] Running
	I0913 18:45:12.670479   22792 system_pods.go:61] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:45:12.670482   22792 system_pods.go:61] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:45:12.670485   22792 system_pods.go:61] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:45:12.670489   22792 system_pods.go:61] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:45:12.670492   22792 system_pods.go:61] "kube-apiserver-ha-617764-m03" [3dedc18a-1964-41af-8797-eec61443095e] Running
	I0913 18:45:12.670496   22792 system_pods.go:61] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:45:12.670499   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:45:12.670502   22792 system_pods.go:61] "kube-controller-manager-ha-617764-m03" [2ef16dd1-da44-4c17-b191-f13d7401a21d] Running
	I0913 18:45:12.670506   22792 system_pods.go:61] "kube-proxy-7bpk5" [075a72a7-32a5-4502-b52d-eeba572f94d4] Running
	I0913 18:45:12.670509   22792 system_pods.go:61] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:45:12.670512   22792 system_pods.go:61] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:45:12.670515   22792 system_pods.go:61] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:45:12.670519   22792 system_pods.go:61] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:45:12.670522   22792 system_pods.go:61] "kube-scheduler-ha-617764-m03" [01d83f8e-84af-4ebb-a64d-90a1a4dd7799] Running
	I0913 18:45:12.670525   22792 system_pods.go:61] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:45:12.670528   22792 system_pods.go:61] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:45:12.670531   22792 system_pods.go:61] "kube-vip-ha-617764-m03" [21987759-d9ea-4367-96c5-f95df97fa81a] Running
	I0913 18:45:12.670534   22792 system_pods.go:61] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:45:12.670540   22792 system_pods.go:74] duration metric: took 185.763517ms to wait for pod list to return data ...
	I0913 18:45:12.670547   22792 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:45:12.857932   22792 request.go:632] Waited for 187.304017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:45:12.858002   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I0913 18:45:12.858012   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:12.858024   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:12.858031   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:12.861412   22792 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 18:45:12.861530   22792 default_sa.go:45] found service account: "default"
	I0913 18:45:12.861547   22792 default_sa.go:55] duration metric: took 190.99324ms for default service account to be created ...
	I0913 18:45:12.861561   22792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:45:13.058902   22792 request.go:632] Waited for 197.279772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:13.058968   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 18:45:13.058975   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:13.058983   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:13.058989   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:13.064227   22792 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 18:45:13.070856   22792 system_pods.go:86] 24 kube-system pods found
	I0913 18:45:13.070880   22792 system_pods.go:89] "coredns-7c65d6cfc9-fdhnm" [5c509676-c7ba-4841-89b5-7e4266abd9c9] Running
	I0913 18:45:13.070885   22792 system_pods.go:89] "coredns-7c65d6cfc9-htrbt" [41a8301e-fca3-4907-bc77-808b013a2d2a] Running
	I0913 18:45:13.070889   22792 system_pods.go:89] "etcd-ha-617764" [e8b297d1-ae3c-45c7-bc17-086a7411c65e] Running
	I0913 18:45:13.070892   22792 system_pods.go:89] "etcd-ha-617764-m02" [54cadd8e-b226-4748-a065-efe913b74058] Running
	I0913 18:45:13.070896   22792 system_pods.go:89] "etcd-ha-617764-m03" [4247e8e8-fa8d-47f3-9ab3-1ec5c9d85de9] Running
	I0913 18:45:13.070899   22792 system_pods.go:89] "kindnet-8mbkd" [4fe1b67c-b4ca-4839-bbc9-2bfeddf91611] Running
	I0913 18:45:13.070902   22792 system_pods.go:89] "kindnet-b9bzd" [81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8] Running
	I0913 18:45:13.070905   22792 system_pods.go:89] "kindnet-bc2zg" [e1f2f8d7-bb8a-44cb-ac52-c9f87c6f6170] Running
	I0913 18:45:13.070908   22792 system_pods.go:89] "kube-apiserver-ha-617764" [b9779d8c-fceb-4764-a9fa-e98e0e8446fd] Running
	I0913 18:45:13.070912   22792 system_pods.go:89] "kube-apiserver-ha-617764-m02" [6a67c49f-958f-4463-b3f3-cdc449987a0e] Running
	I0913 18:45:13.070916   22792 system_pods.go:89] "kube-apiserver-ha-617764-m03" [3dedc18a-1964-41af-8797-eec61443095e] Running
	I0913 18:45:13.070920   22792 system_pods.go:89] "kube-controller-manager-ha-617764" [50f8efc0-95c9-4356-ac31-5ef778f43620] Running
	I0913 18:45:13.070924   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m02" [c9405f4f-2355-4d74-8bce-0afd9709a297] Running
	I0913 18:45:13.070928   22792 system_pods.go:89] "kube-controller-manager-ha-617764-m03" [2ef16dd1-da44-4c17-b191-f13d7401a21d] Running
	I0913 18:45:13.070934   22792 system_pods.go:89] "kube-proxy-7bpk5" [075a72a7-32a5-4502-b52d-eeba572f94d4] Running
	I0913 18:45:13.070938   22792 system_pods.go:89] "kube-proxy-92mml" [36bd37dc-88c4-4264-9e7c-a90246cc5212] Running
	I0913 18:45:13.070944   22792 system_pods.go:89] "kube-proxy-hqm8n" [d21c9abc-9d25-4a59-9830-2325e7f8ad44] Running
	I0913 18:45:13.070947   22792 system_pods.go:89] "kube-scheduler-ha-617764" [0ffae4ca-101d-499c-a10e-d24d42c6ddbd] Running
	I0913 18:45:13.070951   22792 system_pods.go:89] "kube-scheduler-ha-617764-m02" [46451fc8-0fbe-4c70-b331-3db2cefacd60] Running
	I0913 18:45:13.070955   22792 system_pods.go:89] "kube-scheduler-ha-617764-m03" [01d83f8e-84af-4ebb-a64d-90a1a4dd7799] Running
	I0913 18:45:13.070961   22792 system_pods.go:89] "kube-vip-ha-617764" [7960420c-8f57-47a3-8d63-de5ad027f8bd] Running
	I0913 18:45:13.070964   22792 system_pods.go:89] "kube-vip-ha-617764-m02" [da600d89-fd3d-45a3-af3b-66cfd443562d] Running
	I0913 18:45:13.070967   22792 system_pods.go:89] "kube-vip-ha-617764-m03" [21987759-d9ea-4367-96c5-f95df97fa81a] Running
	I0913 18:45:13.070970   22792 system_pods.go:89] "storage-provisioner" [1e5f1a84-1798-430e-af04-82469e8f4a7b] Running
	I0913 18:45:13.070975   22792 system_pods.go:126] duration metric: took 209.406637ms to wait for k8s-apps to be running ...
	I0913 18:45:13.070983   22792 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:45:13.071021   22792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:45:13.090449   22792 system_svc.go:56] duration metric: took 19.454477ms WaitForService to wait for kubelet
	I0913 18:45:13.090497   22792 kubeadm.go:582] duration metric: took 26.149775771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:45:13.090519   22792 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:45:13.258912   22792 request.go:632] Waited for 168.315715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes
	I0913 18:45:13.258991   22792 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes
	I0913 18:45:13.259000   22792 round_trippers.go:469] Request Headers:
	I0913 18:45:13.259020   22792 round_trippers.go:473]     Accept: application/json, */*
	I0913 18:45:13.259027   22792 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 18:45:13.263259   22792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 18:45:13.264256   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264275   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264288   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264294   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264299   22792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:45:13.264303   22792 node_conditions.go:123] node cpu capacity is 2
	I0913 18:45:13.264308   22792 node_conditions.go:105] duration metric: took 173.783377ms to run NodePressure ...
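
The NodePressure step above lists all nodes and reads their capacity (17734596Ki of ephemeral storage and 2 CPUs per node in this run). An equivalent, purely illustrative client-go sketch, again assuming the default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
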
	I0913 18:45:13.264323   22792 start.go:241] waiting for startup goroutines ...
	I0913 18:45:13.264349   22792 start.go:255] writing updated cluster config ...
	I0913 18:45:13.264642   22792 ssh_runner.go:195] Run: rm -f paused
	I0913 18:45:13.317314   22792 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:45:13.319418   22792 out.go:177] * Done! kubectl is now configured to use "ha-617764" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.669169508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253395669149304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3aabe467-c455-4e44-ae3c-39b70eb44ac4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.670059115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0d570a9-1f60-45e3-b84d-460cb9f1aed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.670129300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0d570a9-1f60-45e3-b84d-460cb9f1aed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.670422757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0d570a9-1f60-45e3-b84d-460cb9f1aed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.714807065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=348af007-0094-4584-af39-68920b16534a name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.714880321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=348af007-0094-4584-af39-68920b16534a name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.716014181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab77602f-c22b-4de7-a58c-86750d9ead0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.716478660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253395716456963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab77602f-c22b-4de7-a58c-86750d9ead0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.716963286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82982893-dde1-4392-8fe9-6692dd5c0422 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.717017693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82982893-dde1-4392-8fe9-6692dd5c0422 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.717334196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82982893-dde1-4392-8fe9-6692dd5c0422 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.756072167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3e2b15-eba7-4675-854c-64f042bf5cfb name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.756141771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3e2b15-eba7-4675-854c-64f042bf5cfb name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.757445593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d859f64c-ded3-4a49-84ba-5735f8cfad86 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.757834123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253395757813982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d859f64c-ded3-4a49-84ba-5735f8cfad86 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.758295041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca86dd60-3b71-4336-a962-ae22b767db4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.758346900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca86dd60-3b71-4336-a962-ae22b767db4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.758562059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca86dd60-3b71-4336-a962-ae22b767db4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.795164731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3929451b-933f-42b4-a33b-a8004d820397 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.795279914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3929451b-933f-42b4-a33b-a8004d820397 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.797091221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68628f89-cc61-4dff-9735-7601c9240e72 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.797575938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253395797549415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68628f89-cc61-4dff-9735-7601c9240e72 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.798172545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ca6235e-c711-4215-bb06-d8a30dbe8087 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.798225503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ca6235e-c711-4215-bb06-d8a30dbe8087 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:49:55 ha-617764 crio[672]: time="2024-09-13 18:49:55.798519708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253118277078152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966219275312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726252966228541661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539,PodSandboxId:83953eef4efcdfdc33ae2e81c51ed92fad3f247f0c72fd0ea207b106fd0a340a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726252966074952662,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262529
54213914741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726252953884531828,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a,PodSandboxId:12d8d3bba4f5d79766fb30a3be93e5c5b6b1d99c264f7fe990971c521de8ba00,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726252945807765880,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1649fa4768b4904b58c62ad7504e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14,PodSandboxId:c771b93aaed83a51d9fae504a0ebd7917b460d9afca8dcf2f7e6262edda971f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726252942325370148,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726252942304927946,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80,PodSandboxId:4d7e2cf8f9de86c4268f7fbb0b56f23617bf78a0a49ea7214fefaa5facb8d3d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726252942281818435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726252942238691966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ca6235e-c711-4215-bb06-d8a30dbe8087 name=/runtime.v1.RuntimeService/ListContainers
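
The repeated /runtime.v1.RuntimeService/Version, ImageFsInfo, and ListContainers entries above are routine CRI polling against cri-o; each ListContainers response carries the same container list. For orientation only, the sketch below shows how such a ListContainers call can be issued directly against the cri-o socket named in the node's cri-socket annotation further down (unix:///var/run/crio/crio.sock). This is an illustrative example, not part of the test suite, and it assumes the published google.golang.org/grpc and k8s.io/cri-api packages.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial cri-o's CRI endpoint; the socket path is the one advertised in this node's annotations.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same shape of request as in the debug log: an empty filter returns the full container list.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Short container ID, container name, state, and the owning pod label seen in the log.
		fmt.Printf("%s  %s  %s  %s\n",
			c.Id[:13], c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
	}
}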
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0d456d4bd90d2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   99c7958cb4872       busybox-7dff88458-t4fwq
	3502979cf3ea1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   bd08f2ca13336       coredns-7c65d6cfc9-fdhnm
	31a66627d146a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   e586cc7654290       coredns-7c65d6cfc9-htrbt
	0647676f81788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   83953eef4efcd       storage-provisioner
	7e98c43ffb734       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   47bf978975921       kindnet-b9bzd
	5065ca7882269       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   585827783c674       kube-proxy-92mml
	b116fa0d9ecbf       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   12d8d3bba4f5d       kube-vip-ha-617764
	8a41f6c9e152d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   c771b93aaed83       kube-controller-manager-ha-617764
	8a31170a295b7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   16bf73d50b501       kube-scheduler-ha-617764
	1d66613ccb1f2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   4d7e2cf8f9de8       kube-apiserver-ha-617764
	3b2f0c73fe9ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   353214980e0a1       etcd-ha-617764
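
The CREATED column in this table is derived from the nanosecond CreatedAt values in the ListContainers payloads above. As a small illustration (both timestamps are copied verbatim from this log; only the rendering of the age is assumed), the etcd row's "7 minutes ago" follows from:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both values appear in the log above: the ImageFsInfo scrape time and the
	// etcd container's CreatedAt, each a Unix timestamp in nanoseconds.
	collected := time.Unix(0, 1726253395716456963)
	created := time.Unix(0, 1726252942238691966)

	fmt.Println(created.UTC().Format(time.RFC3339))           // 2024-09-13T18:42:22Z
	fmt.Println(collected.Sub(created).Truncate(time.Minute)) // 7m0s, rendered as "7 minutes ago"
}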
	
	
	==> coredns [31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b] <==
	[INFO] 10.244.1.2:49297 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01533144s
	[INFO] 10.244.1.2:34775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173343s
	[INFO] 10.244.1.2:48094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185771s
	[INFO] 10.244.1.2:38224 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261627s
	[INFO] 10.244.2.2:46762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001531358s
	[INFO] 10.244.2.2:49140 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110788s
	[INFO] 10.244.2.2:48200 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122858s
	[INFO] 10.244.0.4:42212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107526s
	[INFO] 10.244.0.4:55473 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001625324s
	[INFO] 10.244.0.4:57662 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027413s
	[INFO] 10.244.0.4:42804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086384s
	[INFO] 10.244.1.2:42712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149698s
	[INFO] 10.244.1.2:33468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117843s
	[INFO] 10.244.1.2:53696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125501s
	[INFO] 10.244.1.2:59050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121214s
	[INFO] 10.244.2.2:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129604s
	[INFO] 10.244.2.2:33290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127517s
	[INFO] 10.244.0.4:48739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096314s
	[INFO] 10.244.0.4:42249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049139s
	[INFO] 10.244.1.2:35348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327466s
	[INFO] 10.244.1.2:36802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158894s
	[INFO] 10.244.2.2:33661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134839s
	[INFO] 10.244.2.2:41493 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135174s
	[INFO] 10.244.0.4:55720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006804s
	[INFO] 10.244.0.4:59841 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009592s
	
	
	==> coredns [3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d] <==
	[INFO] 10.244.0.4:34399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211802s
	[INFO] 10.244.0.4:50067 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000522446s
	[INFO] 10.244.0.4:39102 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001720209s
	[INFO] 10.244.1.2:37027 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286563s
	[INFO] 10.244.1.2:60285 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013541777s
	[INFO] 10.244.1.2:53881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133465s
	[INFO] 10.244.2.2:44355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163171s
	[INFO] 10.244.2.2:36763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001800499s
	[INFO] 10.244.2.2:41469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115361s
	[INFO] 10.244.2.2:40909 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145743s
	[INFO] 10.244.2.2:44681 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149088s
	[INFO] 10.244.0.4:51555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069764s
	[INFO] 10.244.0.4:53574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001057592s
	[INFO] 10.244.0.4:45350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035427s
	[INFO] 10.244.0.4:48145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190172s
	[INFO] 10.244.2.2:36852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187208s
	[INFO] 10.244.2.2:58201 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010302s
	[INFO] 10.244.0.4:45335 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139302s
	[INFO] 10.244.0.4:41623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054642s
	[INFO] 10.244.1.2:43471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145957s
	[INFO] 10.244.1.2:55858 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179256s
	[INFO] 10.244.2.2:35120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154146s
	[INFO] 10.244.2.2:57748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106668s
	[INFO] 10.244.0.4:35176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009163s
	[INFO] 10.244.0.4:35630 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191227s
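
The two coredns sections above use CoreDNS's query-log format: client address and port, a query counter, the question (type, class, name, protocol, request size, DO bit, advertised UDP buffer size), then the response code, header flags, response size, and duration. The NXDOMAIN/NOERROR sequences for kubernetes.default are consistent with the pod resolver expanding the name across its search domains before hitting kubernetes.default.svc.cluster.local. The parser below is a rough sketch; the field order is read off these lines rather than taken from CoreDNS documentation, so treat the pattern as an assumption.

package main

import (
	"fmt"
	"regexp"
)

// Rough pattern for the query-log lines above:
// [INFO] client:port - id "TYPE IN name. proto size do bufsize" RCODE flags rsize duration
var coreDNSLine = regexp.MustCompile(`\[INFO\] (\S+?):(\d+) - (\d+) "(\S+) IN (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) ([a-z,]+) (\d+) (\S+)`)

func main() {
	// A line copied from the log above.
	line := `[INFO] 10.244.1.2:38224 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261627s`
	m := coreDNSLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("client=%s type=%s name=%s rcode=%s duration=%s\n", m[1], m[4], m[5], m[10], m[13])
}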
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:49:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:45:31 +0000   Fri, 13 Sep 2024 18:42:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m28s
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m21s  kube-proxy       
	  Normal  Starting                 7m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m24s  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal  NodeReady                7m11s  kubelet          Node ha-617764 status is now: NodeReady
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal  RegisteredNode           5m4s   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:46:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:45:27 +0000   Fri, 13 Sep 2024 18:47:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    a73fc468-bba1-4d38-b835-10012a86fc0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m32s (x8 over 6m32s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x8 over 6m32s)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s (x7 over 6m32s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-617764-m02 status is now: NodeNotReady
	
	
	Name:               ha-617764-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_44_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:49:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:45:44 +0000   Fri, 13 Sep 2024 18:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-617764-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf9ad263c8a24e5ab1b585d83dd0c49b
	  System UUID:                bf9ad263-c8a2-4e5a-b1b5-85d83dd0c49b
	  Boot ID:                    5302b469-e319-46e4-a87d-2fbb7190087e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-srmxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-617764-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-8mbkd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-617764-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-617764-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-7bpk5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-617764-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-617764-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-617764-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:49:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:45:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:46:23 +0000   Fri, 13 Sep 2024 18:46:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    d2b9d80d-fb6e-4958-9da8-1e29e77fa9a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-47jgz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-5rlkn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-617764-m04 status is now: NodeReady
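	Illustrative follow-up (not part of the captured output): of the four nodes described above, only ha-617764-m02 reports Unknown conditions and carries node.kubernetes.io/unreachable taints, i.e. its kubelet stopped posting status, which is what a stopped secondary control-plane node would look like. A hedged sketch of how one might confirm this against the same profile (the -n m02 node selector assumes the VM is still up):

    kubectl --context ha-617764 get nodes -o wide
    kubectl --context ha-617764 describe node ha-617764-m02 | grep -A6 'Conditions:'
    minikube -p ha-617764 ssh -n m02 -- sudo systemctl status kubelet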
	
	
	==> dmesg <==
	[Sep13 18:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050724] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.773769] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.469844] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep13 18:42] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.036071] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051740] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182667] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.119649] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.275654] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.901030] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.328019] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5] <==
	{"level":"warn","ts":"2024-09-13T18:49:55.860804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:55.898782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:55.960505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.060554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.081550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.088534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.092169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.101413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.108220Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.114118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.118978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.122521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.130008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.136662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.143482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.167875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.175145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.180615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.184553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.189135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.196128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.202217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.211145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.260408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-13T18:49:56.277784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"44b3a0f32f80bb09","from":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:49:56 up 8 min,  0 users,  load average: 0.27, 0.27, 0.14
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1] <==
	I0913 18:49:25.376837       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:49:35.370028       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:49:35.370078       1 main.go:299] handling current node
	I0913 18:49:35.370101       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:49:35.370107       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:49:35.370330       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:49:35.370393       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:49:35.370471       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:49:35.370477       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:49:45.371212       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:49:45.371339       1 main.go:299] handling current node
	I0913 18:49:45.371369       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:49:45.371375       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:49:45.371588       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:49:45.371596       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:49:45.371654       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:49:45.371658       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:49:55.374977       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:49:55.375194       1 main.go:299] handling current node
	I0913 18:49:55.375305       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:49:55.375348       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:49:55.375561       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:49:55.375664       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:49:55.375773       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:49:55.375804       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
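	Illustrative follow-up (not part of the captured log): kindnet is re-listing each node's pod CIDR on a 10s cycle, and it still advertises ha-617764-m02 (10.244.1.0/24) even though that node is NotReady. The per-node assignments it echoes can be read straight from the API:

    kubectl --context ha-617764 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'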
	
	
	==> kube-apiserver [1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80] <==
	I0913 18:42:28.598737       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0913 18:42:28.608881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 18:42:32.773006       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0913 18:42:33.086168       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0913 18:43:24.969062       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.969780       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 556.42µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0913 18:43:24.970485       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.971765       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0913 18:43:24.973078       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.502168ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0913 18:45:20.119793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57468: use of closed network connection
	E0913 18:45:20.304607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57490: use of closed network connection
	E0913 18:45:20.490076       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57522: use of closed network connection
	E0913 18:45:20.696819       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57540: use of closed network connection
	E0913 18:45:20.876650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57570: use of closed network connection
	E0913 18:45:21.058755       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57580: use of closed network connection
	E0913 18:45:21.228559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57598: use of closed network connection
	E0913 18:45:21.414467       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57608: use of closed network connection
	E0913 18:45:21.597909       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57624: use of closed network connection
	E0913 18:45:21.914524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57660: use of closed network connection
	E0913 18:45:22.101117       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57682: use of closed network connection
	E0913 18:45:22.284427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57706: use of closed network connection
	E0913 18:45:22.461214       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57722: use of closed network connection
	E0913 18:45:22.648226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57742: use of closed network connection
	E0913 18:45:22.810730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57764: use of closed network connection
	W0913 18:46:47.295761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118 192.168.39.145]
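	Illustrative follow-up (not part of the captured log): the final line shows the apiserver shrinking the default/kubernetes endpoints to the two reachable control-plane IPs (192.168.39.118 and 192.168.39.145) after 192.168.39.203 (ha-617764-m02) dropped out; the earlier "use of closed network connection" errors appear to be client-side disconnects rather than server faults. The current endpoint set can be checked with:

    kubectl --context ha-617764 -n default get endpoints kubernetes -o yaml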
	
	
	==> kube-controller-manager [8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14] <==
	I0913 18:45:52.540334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:52.551008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:52.757917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	E0913 18:45:52.767717       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a6795e37-2984-4e51-b0e9-20f3c3a9e522\", ResourceVersion:\"935\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 13, 18, 42, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\
\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20240813-c6f155d6\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b8a0a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name
:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbb9c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolu
meClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbb9f8), EmptyDir:(*v1.EmptyDirVolumeSourc
e)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwo
rxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dbba28), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b8a0c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVa
rSource)(0xc001b8a100)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fa
lse, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026b0600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCo
ntainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002575ec0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002378900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil),
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0024fb120)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002575efc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0913 18:45:52.776897       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"e6e9eb5f-8178-4a93-9c83-0365ad1f7e6b\", ResourceVersion:\"888\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 13, 18, 42, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0017b3480), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0024c1f00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c27590), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c275a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0017b3500)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:
\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00259c2a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001d6db68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00235af00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0023a9bc0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001d6dbc0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfill
ed on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 18:45:53.180832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.199944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.241929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:45:57.242678       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m04"
	I0913 18:45:57.257869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:02.771478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:13.500518       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-617764-m04"
	I0913 18:46:13.500677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:13.514958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:17.098470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:46:23.493929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:47:12.126896       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-617764-m04"
	I0913 18:47:12.126991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:12.156704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:12.303974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.787558ms"
	I0913 18:47:12.304320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.062µs"
	I0913 18:47:12.351458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:47:17.350018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	
	
	==> kube-proxy [5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:42:34.167647       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:42:34.198918       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0913 18:42:34.199182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:42:34.253828       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:42:34.253872       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:42:34.253905       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:42:34.256484       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:42:34.257771       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:42:34.257801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:42:34.260502       1 config.go:199] "Starting service config controller"
	I0913 18:42:34.260914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:42:34.261139       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:42:34.261164       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:42:34.262109       1 config.go:328] "Starting node config controller"
	I0913 18:42:34.262140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:42:34.361759       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:42:34.361863       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:42:34.362333       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c] <==
	W0913 18:42:26.897784       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:42:26.897834       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:42:29.405500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:45:52.604951       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tw74q\": pod kube-proxy-tw74q is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tw74q" node="ha-617764-m04"
	E0913 18:45:52.605182       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tw74q\": pod kube-proxy-tw74q is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-tw74q"
	E0913 18:45:52.616503       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-47jgz\": pod kindnet-47jgz is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-47jgz" node="ha-617764-m04"
	E0913 18:45:52.616777       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 52c2fe7a-7d09-4d11-ae85-b0fc016f6f16(kube-system/kindnet-47jgz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-47jgz"
	E0913 18:45:52.616962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-47jgz\": pod kindnet-47jgz is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-47jgz"
	I0913 18:45:52.617091       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-47jgz" node="ha-617764-m04"
	E0913 18:45:52.684630       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j4ht7\": pod kindnet-j4ht7 is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j4ht7" node="ha-617764-m04"
	E0913 18:45:52.684705       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 427dbc82-b752-4208-aa44-73c372996446(kube-system/kindnet-j4ht7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-j4ht7"
	E0913 18:45:52.684722       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j4ht7\": pod kindnet-j4ht7 is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-j4ht7"
	I0913 18:45:52.684740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-j4ht7" node="ha-617764-m04"
	E0913 18:45:52.688566       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jvrw5\": pod kindnet-jvrw5 is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.688697       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1c4990d1-e2c7-48fe-85a3-c6571c60c9b7(kube-system/kindnet-jvrw5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jvrw5"
	E0913 18:45:52.688716       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jvrw5\": pod kindnet-jvrw5 is already assigned to node \"ha-617764-m04\"" pod="kube-system/kindnet-jvrw5"
	I0913 18:45:52.688769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.689590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.689658       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fb31ed1c-fbc0-46ca-b60c-7201362519ff(kube-system/kube-proxy-5rlkn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5rlkn"
	E0913 18:45:52.689678       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-5rlkn"
	I0913 18:45:52.689696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.694462       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:45:52.694585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 848151c4-6f4d-47e6-9447-bd1d09469957(kube-system/kube-proxy-xtt2d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xtt2d"
	E0913 18:45:52.694606       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-xtt2d"
	I0913 18:45:52.694636       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	
	
	==> kubelet <==
	Sep 13 18:48:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:48:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:48:28 ha-617764 kubelet[1315]: E0913 18:48:28.649696    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253308649378157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:28 ha-617764 kubelet[1315]: E0913 18:48:28.649725    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253308649378157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:38 ha-617764 kubelet[1315]: E0913 18:48:38.651439    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253318650959548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:38 ha-617764 kubelet[1315]: E0913 18:48:38.651760    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253318650959548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:48 ha-617764 kubelet[1315]: E0913 18:48:48.653615    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253328652971679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:48 ha-617764 kubelet[1315]: E0913 18:48:48.653963    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253328652971679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:58 ha-617764 kubelet[1315]: E0913 18:48:58.658068    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253338657123289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:48:58 ha-617764 kubelet[1315]: E0913 18:48:58.658170    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253338657123289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:08 ha-617764 kubelet[1315]: E0913 18:49:08.660548    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253348660182129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:08 ha-617764 kubelet[1315]: E0913 18:49:08.660575    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253348660182129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:18 ha-617764 kubelet[1315]: E0913 18:49:18.662851    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253358662507730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:18 ha-617764 kubelet[1315]: E0913 18:49:18.662924    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253358662507730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:28 ha-617764 kubelet[1315]: E0913 18:49:28.549641    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 18:49:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:49:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:49:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:49:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:49:28 ha-617764 kubelet[1315]: E0913 18:49:28.665679    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253368665132649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:28 ha-617764 kubelet[1315]: E0913 18:49:28.665725    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253368665132649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:38 ha-617764 kubelet[1315]: E0913 18:49:38.668148    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253378667582945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:38 ha-617764 kubelet[1315]: E0913 18:49:38.668527    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253378667582945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:48 ha-617764 kubelet[1315]: E0913 18:49:48.670876    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253388670575768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:49:48 ha-617764 kubelet[1315]: E0913 18:49:48.670929    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253388670575768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
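The "Operation cannot be fulfilled ... the object has been modified" error from the controller-manager and the repeated "already assigned to node" binding errors from the scheduler in the logs above are optimistic-concurrency conflicts: while node ha-617764-m04 was joining, two writers raced on the same object's metadata.resourceVersion, and the losing writer is expected to re-read and retry. A minimal way to cross-check that the cluster converged anyway, assuming kubectl access to the ha-617764 context shown in these logs (the jsonpath expression and the grep are illustrative, not part of the test):

    # Current resourceVersion of the DaemonSet the controller-manager failed to update
    kubectl --context ha-617764 -n kube-system get daemonset kube-proxy \
      -o jsonpath='{.metadata.resourceVersion}{"\n"}'

    # Confirm the kube-proxy/kindnet pods the scheduler complained about ended up bound exactly once
    kubectl --context ha-617764 -n kube-system get pods -o wide | grep ha-617764-m04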
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.38s)
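The kube-proxy "could not run nftables command ... Operation not supported" cleanup errors and the kubelet "can't initialize ip6tables table `nat'" canary warnings in the post-mortem logs above both point at missing nf_tables / IPv6 NAT support in the guest kernel, which is also why kube-proxy reports "No iptables support for family IPv6" and runs in single-stack IPv4 iptables mode. A hedged way to confirm this from inside the node, assuming the ha-617764 profile is still running and that the nft binary is present in the guest image (both are assumptions, not facts recorded in this report):

    # List loaded kernel modules; look for nf_tables / ip6table_nat in the output
    minikube -p ha-617764 ssh -n ha-617764 "lsmod"

    # If support is really absent, these should reproduce the errors seen in the logs
    minikube -p ha-617764 ssh -n ha-617764 "sudo nft list tables"
    minikube -p ha-617764 ssh -n ha-617764 "sudo ip6tables -t nat -L -n"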

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-617764 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-617764 -v=7 --alsologtostderr
E0913 18:50:57.575633   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:51:25.281048   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-617764 -v=7 --alsologtostderr: exit status 82 (2m1.805922443s)

                                                
                                                
-- stdout --
	* Stopping node "ha-617764-m04"  ...
	* Stopping node "ha-617764-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:49:57.643369   28614 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:57.643502   28614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:57.643513   28614 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:57.643519   28614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:57.643719   28614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:49:57.643939   28614 out.go:352] Setting JSON to false
	I0913 18:49:57.644023   28614 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:57.644419   28614 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:57.644523   28614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:49:57.644695   28614 mustload.go:65] Loading cluster: ha-617764
	I0913 18:49:57.644824   28614 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:49:57.644846   28614 stop.go:39] StopHost: ha-617764-m04
	I0913 18:49:57.645208   28614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:57.645248   28614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:57.660381   28614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0913 18:49:57.660866   28614 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:57.661428   28614 main.go:141] libmachine: Using API Version  1
	I0913 18:49:57.661450   28614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:57.661752   28614 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:57.664061   28614 out.go:177] * Stopping node "ha-617764-m04"  ...
	I0913 18:49:57.665553   28614 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 18:49:57.665587   28614 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:49:57.665774   28614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 18:49:57.665795   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:49:57.668824   28614 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:57.669224   28614 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:45:38 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:49:57.669249   28614 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:49:57.669377   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:49:57.669532   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:49:57.669663   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:49:57.669771   28614 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:49:57.753700   28614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 18:49:57.807951   28614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 18:49:57.862515   28614 main.go:141] libmachine: Stopping "ha-617764-m04"...
	I0913 18:49:57.862547   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:57.863971   28614 main.go:141] libmachine: (ha-617764-m04) Calling .Stop
	I0913 18:49:57.866917   28614 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 0/120
	I0913 18:49:58.998649   28614 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:49:58.999609   28614 main.go:141] libmachine: Machine "ha-617764-m04" was stopped.
	I0913 18:49:58.999629   28614 stop.go:75] duration metric: took 1.334079441s to stop
	I0913 18:49:58.999645   28614 stop.go:39] StopHost: ha-617764-m03
	I0913 18:49:58.999934   28614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:49:58.999969   28614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:49:59.014190   28614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0913 18:49:59.014534   28614 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:49:59.015023   28614 main.go:141] libmachine: Using API Version  1
	I0913 18:49:59.015048   28614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:49:59.015391   28614 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:49:59.017480   28614 out.go:177] * Stopping node "ha-617764-m03"  ...
	I0913 18:49:59.018780   28614 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 18:49:59.018811   28614 main.go:141] libmachine: (ha-617764-m03) Calling .DriverName
	I0913 18:49:59.019032   28614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 18:49:59.019057   28614 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHHostname
	I0913 18:49:59.022037   28614 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:59.022479   28614 main.go:141] libmachine: (ha-617764-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:bc:fa", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:44:05 +0000 UTC Type:0 Mac:52:54:00:4c:bc:fa Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-617764-m03 Clientid:01:52:54:00:4c:bc:fa}
	I0913 18:49:59.022517   28614 main.go:141] libmachine: (ha-617764-m03) DBG | domain ha-617764-m03 has defined IP address 192.168.39.118 and MAC address 52:54:00:4c:bc:fa in network mk-ha-617764
	I0913 18:49:59.022571   28614 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHPort
	I0913 18:49:59.022720   28614 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHKeyPath
	I0913 18:49:59.022834   28614 main.go:141] libmachine: (ha-617764-m03) Calling .GetSSHUsername
	I0913 18:49:59.022943   28614 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m03/id_rsa Username:docker}
	I0913 18:49:59.108713   28614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 18:49:59.162536   28614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 18:49:59.215869   28614 main.go:141] libmachine: Stopping "ha-617764-m03"...
	I0913 18:49:59.215892   28614 main.go:141] libmachine: (ha-617764-m03) Calling .GetState
	I0913 18:49:59.217380   28614 main.go:141] libmachine: (ha-617764-m03) Calling .Stop
	I0913 18:49:59.220954   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 0/120
	I0913 18:50:00.222487   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 1/120
	I0913 18:50:01.224624   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 2/120
	I0913 18:50:02.226294   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 3/120
	I0913 18:50:03.228569   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 4/120
	I0913 18:50:04.230712   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 5/120
	I0913 18:50:05.232058   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 6/120
	I0913 18:50:06.233486   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 7/120
	I0913 18:50:07.234904   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 8/120
	I0913 18:50:08.236427   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 9/120
	I0913 18:50:09.238882   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 10/120
	I0913 18:50:10.240158   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 11/120
	I0913 18:50:11.242286   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 12/120
	I0913 18:50:12.243722   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 13/120
	I0913 18:50:13.245247   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 14/120
	I0913 18:50:14.246804   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 15/120
	I0913 18:50:15.248748   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 16/120
	I0913 18:50:16.250282   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 17/120
	I0913 18:50:17.251941   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 18/120
	I0913 18:50:18.253393   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 19/120
	I0913 18:50:19.255254   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 20/120
	I0913 18:50:20.256895   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 21/120
	I0913 18:50:21.258346   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 22/120
	I0913 18:50:22.260151   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 23/120
	I0913 18:50:23.261645   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 24/120
	I0913 18:50:24.263469   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 25/120
	I0913 18:50:25.265035   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 26/120
	I0913 18:50:26.266431   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 27/120
	I0913 18:50:27.267839   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 28/120
	I0913 18:50:28.269036   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 29/120
	I0913 18:50:29.270833   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 30/120
	I0913 18:50:30.272169   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 31/120
	I0913 18:50:31.273729   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 32/120
	I0913 18:50:32.275027   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 33/120
	I0913 18:50:33.276368   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 34/120
	I0913 18:50:34.278110   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 35/120
	I0913 18:50:35.279286   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 36/120
	I0913 18:50:36.280532   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 37/120
	I0913 18:50:37.281795   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 38/120
	I0913 18:50:38.283327   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 39/120
	I0913 18:50:39.285013   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 40/120
	I0913 18:50:40.286773   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 41/120
	I0913 18:50:41.288210   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 42/120
	I0913 18:50:42.289694   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 43/120
	I0913 18:50:43.291088   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 44/120
	I0913 18:50:44.292990   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 45/120
	I0913 18:50:45.294614   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 46/120
	I0913 18:50:46.296127   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 47/120
	I0913 18:50:47.297476   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 48/120
	I0913 18:50:48.298850   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 49/120
	I0913 18:50:49.301144   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 50/120
	I0913 18:50:50.302472   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 51/120
	I0913 18:50:51.303826   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 52/120
	I0913 18:50:52.305436   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 53/120
	I0913 18:50:53.307129   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 54/120
	I0913 18:50:54.308848   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 55/120
	I0913 18:50:55.310160   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 56/120
	I0913 18:50:56.311607   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 57/120
	I0913 18:50:57.313048   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 58/120
	I0913 18:50:58.314373   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 59/120
	I0913 18:50:59.316132   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 60/120
	I0913 18:51:00.317378   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 61/120
	I0913 18:51:01.318712   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 62/120
	I0913 18:51:02.320020   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 63/120
	I0913 18:51:03.321294   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 64/120
	I0913 18:51:04.322938   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 65/120
	I0913 18:51:05.324172   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 66/120
	I0913 18:51:06.325588   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 67/120
	I0913 18:51:07.326845   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 68/120
	I0913 18:51:08.328283   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 69/120
	I0913 18:51:09.329912   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 70/120
	I0913 18:51:10.331408   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 71/120
	I0913 18:51:11.332591   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 72/120
	I0913 18:51:12.333950   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 73/120
	I0913 18:51:13.335189   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 74/120
	I0913 18:51:14.336793   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 75/120
	I0913 18:51:15.338079   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 76/120
	I0913 18:51:16.339326   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 77/120
	I0913 18:51:17.340721   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 78/120
	I0913 18:51:18.341961   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 79/120
	I0913 18:51:19.343700   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 80/120
	I0913 18:51:20.344844   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 81/120
	I0913 18:51:21.346163   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 82/120
	I0913 18:51:22.347660   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 83/120
	I0913 18:51:23.348927   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 84/120
	I0913 18:51:24.350880   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 85/120
	I0913 18:51:25.352229   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 86/120
	I0913 18:51:26.353411   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 87/120
	I0913 18:51:27.354668   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 88/120
	I0913 18:51:28.356065   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 89/120
	I0913 18:51:29.357970   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 90/120
	I0913 18:51:30.359415   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 91/120
	I0913 18:51:31.360648   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 92/120
	I0913 18:51:32.361806   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 93/120
	I0913 18:51:33.363043   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 94/120
	I0913 18:51:34.365076   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 95/120
	I0913 18:51:35.366291   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 96/120
	I0913 18:51:36.368654   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 97/120
	I0913 18:51:37.370045   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 98/120
	I0913 18:51:38.371553   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 99/120
	I0913 18:51:39.372900   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 100/120
	I0913 18:51:40.374953   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 101/120
	I0913 18:51:41.376420   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 102/120
	I0913 18:51:42.377665   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 103/120
	I0913 18:51:43.379150   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 104/120
	I0913 18:51:44.380810   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 105/120
	I0913 18:51:45.382253   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 106/120
	I0913 18:51:46.383513   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 107/120
	I0913 18:51:47.384754   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 108/120
	I0913 18:51:48.386778   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 109/120
	I0913 18:51:49.388365   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 110/120
	I0913 18:51:50.389707   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 111/120
	I0913 18:51:51.391082   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 112/120
	I0913 18:51:52.392575   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 113/120
	I0913 18:51:53.393785   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 114/120
	I0913 18:51:54.395416   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 115/120
	I0913 18:51:55.396773   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 116/120
	I0913 18:51:56.397960   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 117/120
	I0913 18:51:57.399351   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 118/120
	I0913 18:51:58.400574   28614 main.go:141] libmachine: (ha-617764-m03) Waiting for machine to stop 119/120
	I0913 18:51:59.401480   28614 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 18:51:59.401532   28614 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0913 18:51:59.403310   28614 out.go:201] 
	W0913 18:51:59.404628   28614 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0913 18:51:59.404642   28614 out.go:270] * 
	* 
	W0913 18:51:59.406886   28614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 18:51:59.408156   28614 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-617764 -v=7 --alsologtostderr" : exit status 82
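Exit status 82 (GUEST_STOP_TIMEOUT) above means the ha-617764-m03 VM never reached a stopped state within the 120 poll attempts, so the stop command gave up while libvirt still reported the domain as "Running". When this reproduces locally under the kvm2 driver, the libvirt side can be inspected and, if necessary, forced down directly; this is a debugging sketch assuming virsh access (typically against qemu:///system) on the host that ran the test, not a step the test itself performs:

    # See which ha-617764 domains libvirt still considers running
    virsh list --all

    # Ask the stuck node for a graceful ACPI shutdown first
    virsh shutdown ha-617764-m03

    # Last resort: hard power-off of the stuck domain (equivalent to pulling the plug)
    virsh destroy ha-617764-m03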
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-617764 --wait=true -v=7 --alsologtostderr
E0913 18:54:06.601636   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:55:29.666651   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:55:57.575984   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-617764 --wait=true -v=7 --alsologtostderr: (4m2.338014567s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-617764
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.803238021s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764 -v=7                                                         | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-617764 -v=7                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:51 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:51:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:51:59.451042   29072 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:51:59.451140   29072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:51:59.451144   29072 out.go:358] Setting ErrFile to fd 2...
	I0913 18:51:59.451149   29072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:51:59.451314   29072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:51:59.451836   29072 out.go:352] Setting JSON to false
	I0913 18:51:59.452744   29072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2062,"bootTime":1726251457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:51:59.452831   29072 start.go:139] virtualization: kvm guest
	I0913 18:51:59.455234   29072 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:51:59.456815   29072 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:51:59.456815   29072 notify.go:220] Checking for updates...
	I0913 18:51:59.458955   29072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:51:59.460165   29072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:51:59.461323   29072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:51:59.462462   29072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:51:59.463570   29072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:51:59.465250   29072 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:51:59.465363   29072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:51:59.465995   29072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:51:59.466037   29072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:51:59.481305   29072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0913 18:51:59.481921   29072 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:51:59.482544   29072 main.go:141] libmachine: Using API Version  1
	I0913 18:51:59.482573   29072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:51:59.482889   29072 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:51:59.483034   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.517035   29072 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:51:59.518169   29072 start.go:297] selected driver: kvm2
	I0913 18:51:59.518183   29072 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:51:59.518313   29072 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:51:59.518606   29072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:51:59.518673   29072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:51:59.533606   29072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:51:59.534276   29072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:51:59.534309   29072 cni.go:84] Creating CNI manager for ""
	I0913 18:51:59.534361   29072 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 18:51:59.534417   29072 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:51:59.534554   29072 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:51:59.536450   29072 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:51:59.537765   29072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:51:59.537816   29072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:51:59.537831   29072 cache.go:56] Caching tarball of preloaded images
	I0913 18:51:59.537907   29072 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:51:59.537916   29072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:51:59.538024   29072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:51:59.538284   29072 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:51:59.538328   29072 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "ha-617764"
	I0913 18:51:59.538341   29072 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:51:59.538348   29072 fix.go:54] fixHost starting: 
	I0913 18:51:59.538613   29072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:51:59.538646   29072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:51:59.553757   29072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0913 18:51:59.554281   29072 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:51:59.554746   29072 main.go:141] libmachine: Using API Version  1
	I0913 18:51:59.554766   29072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:51:59.555115   29072 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:51:59.555302   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.555470   29072 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:51:59.556975   29072 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:51:59.556997   29072 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:51:59.559148   29072 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:51:59.560610   29072 machine.go:93] provisionDockerMachine start ...
	I0913 18:51:59.560637   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.560860   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.563529   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.564010   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.564033   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.564192   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.564370   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.564495   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.564621   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.564767   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.564944   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.564955   29072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:51:59.675402   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:51:59.675429   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.675681   29072 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:51:59.675704   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.675871   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.678408   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.678803   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.678836   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.678929   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.679153   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.679316   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.679474   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.679633   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.679848   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.679862   29072 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:51:59.811505   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:51:59.811532   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.814236   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.814663   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.814681   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.814889   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.815061   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.815194   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.815302   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.815458   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.815642   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.815664   29072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:51:59.923967   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:51:59.923996   29072 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:51:59.924029   29072 buildroot.go:174] setting up certificates
	I0913 18:51:59.924037   29072 provision.go:84] configureAuth start
	I0913 18:51:59.924045   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.924335   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:51:59.927027   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.927351   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.927381   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.927497   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.929576   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.929904   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.929924   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.930048   29072 provision.go:143] copyHostCerts
	I0913 18:51:59.930078   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:51:59.930140   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:51:59.930153   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:51:59.930219   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:51:59.930289   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:51:59.930306   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:51:59.930312   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:51:59.930336   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:51:59.930376   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:51:59.930393   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:51:59.930398   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:51:59.930418   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:51:59.930461   29072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
	I0913 18:52:00.048247   29072 provision.go:177] copyRemoteCerts
	I0913 18:52:00.048305   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:52:00.048326   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:52:00.050757   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.051071   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:52:00.051100   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.051290   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:52:00.051461   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.051606   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:52:00.051723   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:52:00.137264   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:52:00.137334   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:52:00.163407   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:52:00.163496   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:52:00.189621   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:52:00.189701   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 18:52:00.217762   29072 provision.go:87] duration metric: took 293.713666ms to configureAuth
	I0913 18:52:00.217788   29072 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:52:00.218009   29072 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:52:00.218092   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:52:00.220836   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.221218   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:52:00.221242   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.221408   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:52:00.221603   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.221754   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.221858   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:52:00.222007   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:52:00.222233   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:52:00.222253   29072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:53:30.964596   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:53:30.964623   29072 machine.go:96] duration metric: took 1m31.403995279s to provisionDockerMachine
	I0913 18:53:30.964635   29072 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 18:53:30.964646   29072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:53:30.964661   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:30.964953   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:53:30.964983   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:30.967854   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:30.968233   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:30.968256   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:30.968420   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:30.968577   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:30.968735   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:30.968862   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.054747   29072 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:53:31.058746   29072 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:53:31.058765   29072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:53:31.058826   29072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:53:31.058894   29072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:53:31.058903   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:53:31.058980   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:53:31.070546   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:53:31.095694   29072 start.go:296] duration metric: took 131.045494ms for postStartSetup
	I0913 18:53:31.095766   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.096067   29072 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 18:53:31.096098   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.099237   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.099625   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.099662   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.099882   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.100052   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.100306   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.100466   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 18:53:31.181755   29072 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 18:53:31.181776   29072 fix.go:56] duration metric: took 1m31.64342706s for fixHost
	I0913 18:53:31.181802   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.184579   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.184952   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.184977   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.185120   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.185300   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.185440   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.185543   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.185681   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:53:31.185881   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:53:31.185895   29072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:53:31.290841   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726253611.257610943
	
	I0913 18:53:31.290871   29072 fix.go:216] guest clock: 1726253611.257610943
	I0913 18:53:31.290882   29072 fix.go:229] Guest: 2024-09-13 18:53:31.257610943 +0000 UTC Remote: 2024-09-13 18:53:31.181784392 +0000 UTC m=+91.764368095 (delta=75.826551ms)
	I0913 18:53:31.290914   29072 fix.go:200] guest clock delta is within tolerance: 75.826551ms
	I0913 18:53:31.290921   29072 start.go:83] releasing machines lock for "ha-617764", held for 1m31.752584374s
	I0913 18:53:31.290949   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.291205   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:53:31.293801   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.294132   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.294162   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.294317   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.294836   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.294997   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.295175   29072 ssh_runner.go:195] Run: cat /version.json
	I0913 18:53:31.295188   29072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:53:31.295194   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.295224   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.297699   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.297943   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298058   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.298127   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298320   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.298384   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.298412   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298457   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.298549   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.298608   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.298661   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.298724   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.298767   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.298868   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.375612   29072 ssh_runner.go:195] Run: systemctl --version
	I0913 18:53:31.401066   29072 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:53:31.565852   29072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:53:31.571778   29072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:53:31.571847   29072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:53:31.581630   29072 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 18:53:31.581654   29072 start.go:495] detecting cgroup driver to use...
	I0913 18:53:31.581722   29072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:53:31.604675   29072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:53:31.619441   29072 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:53:31.619504   29072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:53:31.633774   29072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:53:31.648164   29072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:53:31.796712   29072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:53:31.946300   29072 docker.go:233] disabling docker service ...
	I0913 18:53:31.946385   29072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:53:31.964222   29072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:53:31.978104   29072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:53:32.122796   29072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:53:32.266790   29072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:53:32.280955   29072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:53:32.300799   29072 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:53:32.300873   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.311427   29072 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:53:32.311491   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.321633   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.331772   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.342393   29072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:53:32.352735   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.362749   29072 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.373605   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.383318   29072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:53:32.392115   29072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:53:32.400858   29072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:53:32.539943   29072 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:53:38.961852   29072 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.421871662s)
	I0913 18:53:38.961887   29072 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:53:38.961940   29072 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:53:38.967154   29072 start.go:563] Will wait 60s for crictl version
	I0913 18:53:38.967216   29072 ssh_runner.go:195] Run: which crictl
	I0913 18:53:38.971149   29072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:53:39.014499   29072 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 18:53:39.014569   29072 ssh_runner.go:195] Run: crio --version
	I0913 18:53:39.044787   29072 ssh_runner.go:195] Run: crio --version
	I0913 18:53:39.078019   29072 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:53:39.079451   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:53:39.082069   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:39.082416   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:39.082436   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:39.082687   29072 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:53:39.087441   29072 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:53:39.087568   29072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:53:39.087605   29072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:53:39.136584   29072 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:53:39.136607   29072 crio.go:433] Images already preloaded, skipping extraction
	I0913 18:53:39.136667   29072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:53:39.172458   29072 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:53:39.172492   29072 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:53:39.172503   29072 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 18:53:39.172747   29072 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:53:39.172847   29072 ssh_runner.go:195] Run: crio config
	I0913 18:53:39.228507   29072 cni.go:84] Creating CNI manager for ""
	I0913 18:53:39.228533   29072 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 18:53:39.228544   29072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:53:39.228572   29072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:53:39.228739   29072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
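	The kubeadm, kubelet and kube-proxy configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp step below). A minimal sketch for reading it back and letting kubeadm sanity-check it, assuming the ha-617764 profile, and assuming a kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.31.1 and supports the "config validate" subcommand (neither is shown in this log):

	    # Dump the staged config, then ask kubeadm to check it for schema problems
	    minikube ssh -p ha-617764 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # binary path and subcommand support are assumptions, not confirmed by this log
	    minikube ssh -p ha-617764 -- sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new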
	
	I0913 18:53:39.228762   29072 kube-vip.go:115] generating kube-vip config ...
	I0913 18:53:39.228808   29072 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:53:39.241131   29072 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:53:39.241264   29072 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
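	kube-vip runs as a static pod and is expected to place the HA VIP 192.168.39.254 on eth0 and load-balance port 8443 across the control-plane nodes. A minimal sketch for checking both, assuming the kubectl context ha-617764 exists on the host and that anonymous access to /healthz is left at the upstream default:

	    # The mirror pod for the static manifest should show up in kube-system
	    kubectl --context ha-617764 -n kube-system get pod kube-vip-ha-617764
	    # The VIP should terminate TLS for the API server and answer /healthz
	    curl -k https://192.168.39.254:8443/healthz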
	I0913 18:53:39.241322   29072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:53:39.251995   29072 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:53:39.252053   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 18:53:39.262324   29072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 18:53:39.280473   29072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:53:39.298610   29072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 18:53:39.316756   29072 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:53:39.334687   29072 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:53:39.340146   29072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:53:39.486893   29072 ssh_runner.go:195] Run: sudo systemctl start kubelet
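	After the daemon-reload the kubelet is started; if the control plane does not come back, the kubelet journal on the node is the first place to look. A minimal sketch, assuming the ha-617764 profile:

	    # Confirm the kubelet is running and tail its most recent log entries
	    minikube ssh -p ha-617764 -- sudo systemctl is-active kubelet
	    minikube ssh -p ha-617764 -- sudo journalctl -u kubelet --no-pager -n 50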
	I0913 18:53:39.502556   29072 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 18:53:39.502579   29072 certs.go:194] generating shared ca certs ...
	I0913 18:53:39.502597   29072 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.502766   29072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:53:39.502812   29072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:53:39.502821   29072 certs.go:256] generating profile certs ...
	I0913 18:53:39.502893   29072 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:53:39.502919   29072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066
	I0913 18:53:39.502941   29072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.118 192.168.39.254]
	I0913 18:53:39.671445   29072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 ...
	I0913 18:53:39.671472   29072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066: {Name:mk866d8ebfd148c5aa5dd4cf3cd73b7d93c34404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.671644   29072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066 ...
	I0913 18:53:39.671655   29072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066: {Name:mk328fdb2d1d58c24ba660ea05d28edbd4af5263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.671724   29072 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:53:39.671886   29072 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 18:53:39.672011   29072 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:53:39.672025   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:53:39.672037   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:53:39.672051   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:53:39.672064   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:53:39.672076   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:53:39.672102   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:53:39.672116   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:53:39.672128   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
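	The regenerated apiserver certificate above is signed with SANs for every control-plane IP, the service IP 10.96.0.1 and the HA VIP 192.168.39.254. A minimal sketch for confirming those SANs actually landed in the certificate, using the local path shown in this log:

	    # Print the Subject Alternative Name extension of the profile's apiserver cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'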
	I0913 18:53:39.672174   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:53:39.672203   29072 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:53:39.672212   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:53:39.672238   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:53:39.672260   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:53:39.672281   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:53:39.672318   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:53:39.672341   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.672356   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:53:39.672370   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:39.672951   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:53:39.700311   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:53:39.724706   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:53:39.748770   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:53:39.772719   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 18:53:39.796146   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:53:39.819779   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:53:39.844430   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:53:39.868108   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:53:39.891117   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:53:39.928910   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:53:39.956532   29072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:53:39.974013   29072 ssh_runner.go:195] Run: openssl version
	I0913 18:53:39.980139   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:53:39.991816   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.996429   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.996499   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:53:40.002296   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:53:40.011742   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:53:40.022292   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.027042   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.027104   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.032750   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:53:40.041962   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:53:40.052402   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.056867   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.056911   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.062554   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
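	The link targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is exactly what the preceding openssl x509 -hash -noout calls compute; OpenSSL resolves CAs in /etc/ssl/certs by that hash. A minimal sketch of the same convention, with a hypothetical CA file ./ca.pem:

	    # Link a CA cert under its OpenSSL subject hash so hash-based lookups find it
	    hash=$(openssl x509 -hash -noout -in ./ca.pem)
	    sudo ln -fs "$(pwd)/ca.pem" "/etc/ssl/certs/${hash}.0"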
	I0913 18:53:40.071636   29072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:53:40.076041   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 18:53:40.081558   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 18:53:40.086944   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 18:53:40.092374   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 18:53:40.097877   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 18:53:40.103538   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
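	Each -checkend 86400 call asks openssl whether the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit status means it expires inside that window. A minimal sketch with a hypothetical ./server.crt:

	    # Exit 0 if the cert is valid for at least another 24h, non-zero otherwise
	    if openssl x509 -noout -in ./server.crt -checkend 86400; then
	      echo "certificate valid for at least another 24h"
	    else
	      echo "certificate expires within 24h (or is already expired)"
	    fi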
	I0913 18:53:40.109085   29072 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:53:40.109196   29072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:53:40.109230   29072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:53:40.146805   29072 cri.go:89] found id: "6cda6910c0b8a31f1305274b5b5159cd8e7f49d4f80f9a990f705bf107a548a6"
	I0913 18:53:40.146828   29072 cri.go:89] found id: "0439d3ac606c787a4b2867d3b05dc915beecb59f9e5b7bfdd3792f7d2ac6208a"
	I0913 18:53:40.146832   29072 cri.go:89] found id: "6b090ae4c1c69f7f8d5633fb50dcc2f26a44e8e5949ec8befbaabc61bb3a0bec"
	I0913 18:53:40.146835   29072 cri.go:89] found id: "3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d"
	I0913 18:53:40.146837   29072 cri.go:89] found id: "31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b"
	I0913 18:53:40.146841   29072 cri.go:89] found id: "0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539"
	I0913 18:53:40.146853   29072 cri.go:89] found id: "7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1"
	I0913 18:53:40.146856   29072 cri.go:89] found id: "5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218"
	I0913 18:53:40.146858   29072 cri.go:89] found id: "b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a"
	I0913 18:53:40.146863   29072 cri.go:89] found id: "8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14"
	I0913 18:53:40.146866   29072 cri.go:89] found id: "8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c"
	I0913 18:53:40.146868   29072 cri.go:89] found id: "1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80"
	I0913 18:53:40.146873   29072 cri.go:89] found id: "3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5"
	I0913 18:53:40.146876   29072 cri.go:89] found id: ""
	I0913 18:53:40.146911   29072 ssh_runner.go:195] Run: sudo runc list -f json
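	Before deciding how to restart the cluster, minikube enumerates the existing kube-system containers through the CRI, which is what produced the list of container IDs above. A minimal sketch of the same query run by hand, assuming the ha-617764 profile:

	    # List every kube-system container (running or exited) that CRI-O knows about
	    minikube ssh -p ha-617764 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system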
	
	
	==> CRI-O <==
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.512710429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a83570fe-ff1b-4305-bc5f-46c08788b50b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.515948843Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5540cf72-6780-4425-a20c-3f4da078f336 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.516223018Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-t4fwq,Uid:1bc3749b-0225-445c-9b86-767558392df7,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253659740539351,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:45:14.261429860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-617764,Uid:5545735943f8ff5a38c9aea0b4c785ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726253640555148083,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{kubernetes.io/config.hash: 5545735943f8ff5a38c9aea0b4c785ad,kubernetes.io/config.seen: 2024-09-13T18:53:39.302561175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-htrbt,Uid:41a8301e-fca3-4907-bc77-808b013a2d2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626040810467,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-13T18:42:45.550911912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-617764,Uid:7f4db9ee38410b02d601ed80ae90b5a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626023937802,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.145:8443,kubernetes.io/config.hash: 7f4db9ee38410b02d601ed80ae90b5a4,kubernetes.io/config.seen: 2024-09-13T18:42:28.449170428Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fdhnm,Uid:5c50
9676-c7ba-4841-89b5-7e4266abd9c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626004489664,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.562483180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&PodSandboxMetadata{Name:etcd-ha-617764,Uid:bf3d4ca74d8429dc43b760fdf8f185ab,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625988044685,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,tier: control-plane,},Annotations:map[string]s
tring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bf3d4ca74d8429dc43b760fdf8f185ab,kubernetes.io/config.seen: 2024-09-13T18:42:28.449166555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-92mml,Uid:36bd37dc-88c4-4264-9e7c-a90246cc5212,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625963443558,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.813185019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-617764,Uid:815ca8cb73177215968b5c5242b63776,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625961334887,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 815ca8cb73177215968b5c5242b63776,kubernetes.io/config.seen: 2024-09-13T18:42:28.449171632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1e5f1a84-1798-430e-af04-82469e8f4a7b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952996915,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,i
o.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T18:42:45.559796406Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-617764,Uid:15cf7928620050653d6239c1007547bd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952430273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15cf7928620050653d6239c1007547bd,kubernetes.io/config.seen: 2024-09-13T18:42:28.449172711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&PodSandboxMetadata{Name:kindnet-b9bzd,Uid:81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625948438015,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.806641658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5540cf72-6780-4425-a20c-3f4da078f336 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.517210234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff53d03d-b18d-40a4-b85d-be1fb0903085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.517352170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff53d03d-b18d-40a4-b85d-be1fb0903085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.517560604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928
620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff53d03d-b18d-40a4-b85d-be1fb0903085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.558818588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62e8fa84-974e-4865-87c7-594943e12459 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.558927032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62e8fa84-974e-4865-87c7-594943e12459 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.560382536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63828d80-9291-4978-b0dc-92be55131ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.560839971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253762560815969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63828d80-9291-4978-b0dc-92be55131ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.561544479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c937c6ea-f274-4715-b455-9426e30b815a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.561652378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c937c6ea-f274-4715-b455-9426e30b815a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.562121017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c937c6ea-f274-4715-b455-9426e30b815a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.615071857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54ab78ba-b18c-494f-9ed1-991199a0d12d name=/runtime.v1.RuntimeService/Version
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.615174893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54ab78ba-b18c-494f-9ed1-991199a0d12d name=/runtime.v1.RuntimeService/Version
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.617355312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ffcfb62-dba9-4aee-a9fb-fee05514ea38 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.618181647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253762618138726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ffcfb62-dba9-4aee-a9fb-fee05514ea38 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.627791851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba81e386-b75c-409e-8b07-145a0430ca83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.627927456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba81e386-b75c-409e-8b07-145a0430ca83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.628520819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba81e386-b75c-409e-8b07-145a0430ca83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.647073149Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fbba10bd-68b3-40aa-a1c5-f7e6b5c05f6f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.647540486Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-t4fwq,Uid:1bc3749b-0225-445c-9b86-767558392df7,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253659740539351,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:45:14.261429860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-617764,Uid:5545735943f8ff5a38c9aea0b4c785ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726253640555148083,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{kubernetes.io/config.hash: 5545735943f8ff5a38c9aea0b4c785ad,kubernetes.io/config.seen: 2024-09-13T18:53:39.302561175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-htrbt,Uid:41a8301e-fca3-4907-bc77-808b013a2d2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626040810467,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-13T18:42:45.550911912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-617764,Uid:7f4db9ee38410b02d601ed80ae90b5a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626023937802,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.145:8443,kubernetes.io/config.hash: 7f4db9ee38410b02d601ed80ae90b5a4,kubernetes.io/config.seen: 2024-09-13T18:42:28.449170428Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fdhnm,Uid:5c50
9676-c7ba-4841-89b5-7e4266abd9c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626004489664,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.562483180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&PodSandboxMetadata{Name:etcd-ha-617764,Uid:bf3d4ca74d8429dc43b760fdf8f185ab,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625988044685,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,tier: control-plane,},Annotations:map[string]s
tring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bf3d4ca74d8429dc43b760fdf8f185ab,kubernetes.io/config.seen: 2024-09-13T18:42:28.449166555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-92mml,Uid:36bd37dc-88c4-4264-9e7c-a90246cc5212,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625963443558,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.813185019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-617764,Uid:815ca8cb73177215968b5c5242b63776,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625961334887,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 815ca8cb73177215968b5c5242b63776,kubernetes.io/config.seen: 2024-09-13T18:42:28.449171632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1e5f1a84-1798-430e-af04-82469e8f4a7b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952996915,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,i
o.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T18:42:45.559796406Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-617764,Uid:15cf7928620050653d6239c1007547bd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952430273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15cf7928620050653d6239c1007547bd,kubernetes.io/config.seen: 2024-09-13T18:42:28.449172711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&PodSandboxMetadata{Name:kindnet-b9bzd,Uid:81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625948438015,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.806641658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-t4fwq,Uid:1bc3749b-0225-445c-9b86-767558392df7,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726253114586821266,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:45:14.261429860Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fdhnm,Uid:5c509676-c7ba-4841-89b5-7e4266abd9c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252965889636482,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.562483180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-htrbt,Uid:41a8301e-fca3-4907-bc77-808b013a2d2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252965859469324,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.550911912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-92mml,Uid:36bd37dc-88c4-4264-9e7c-a90246cc5212,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252953724191437,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.813185019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&PodSandboxMetadata{Name:kindnet-b9bzd,Uid:81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252953714445038,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.806641658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&PodSandboxMetadata{Name:etcd-ha-617764,Uid:bf3d4ca74d8429dc43b760fdf8f185ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252942072834224,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bf3d4ca74d8429dc43b760fdf8f185ab,kubernetes.io/config.seen: 2024-09-13T18:42:21.598282672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-617764,Uid:15cf7928620050653d6239c1007547bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252942055915287,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15cf7928
620050653d6239c1007547bd,kubernetes.io/config.seen: 2024-09-13T18:42:21.598280864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fbba10bd-68b3-40aa-a1c5-f7e6b5c05f6f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.648892109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07dce380-1b68-439a-9e01-ddb7af6260ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.648965805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07dce380-1b68-439a-9e01-ddb7af6260ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:56:02 ha-617764 crio[3569]: time="2024-09-13 18:56:02.649410459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07dce380-1b68-439a-9e01-ddb7af6260ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	570c77981741f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       4                   9e2f87d06434f       storage-provisioner
	0a368121b3974       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   dec28a9d29645       kube-apiserver-ha-617764
	32fcfa457f3ff       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   ba66cfa072f5d       kube-controller-manager-ha-617764
	2bb3333d84624       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   0238ab84a5121       busybox-7dff88458-t4fwq
	59d4f7dd69063       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   9e2f87d06434f       storage-provisioner
	46d659112c682       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   566613db4514b       kube-vip-ha-617764
	09fe052337ef3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   5f1a3394b645b       coredns-7c65d6cfc9-fdhnm
	dddc0dfb6a255       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   18e2ef1278c48       kindnet-b9bzd
	b752b1ac699cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   3a3adb124d23e       coredns-7c65d6cfc9-htrbt
	15c33340e3091       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   acfcaea56c23e       etcd-ha-617764
	ed301adb1e454       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   dec28a9d29645       kube-apiserver-ha-617764
	da04db3dd6709       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   ba66cfa072f5d       kube-controller-manager-ha-617764
	1d1a0b2d1c95e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   09bbefd12114c       kube-proxy-92mml
	80a7cb47dee67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   a63972ff65b12       kube-scheduler-ha-617764
	0d456d4bd90d2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   99c7958cb4872       busybox-7dff88458-t4fwq
	3502979cf3ea1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   bd08f2ca13336       coredns-7c65d6cfc9-fdhnm
	31a66627d146a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   e586cc7654290       coredns-7c65d6cfc9-htrbt
	7e98c43ffb734       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   47bf978975921       kindnet-b9bzd
	5065ca7882269       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   585827783c674       kube-proxy-92mml
	8a31170a295b7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   16bf73d50b501       kube-scheduler-ha-617764
	3b2f0c73fe9ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   353214980e0a1       etcd-ha-617764
	
	
	==> coredns [09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[818669773]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 18:53:51.525) (total time: 10000ms):
	Trace[818669773]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:54:01.526)
	Trace[818669773]: [10.000979018s] [10.000979018s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b] <==
	[INFO] 10.244.0.4:42212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107526s
	[INFO] 10.244.0.4:55473 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001625324s
	[INFO] 10.244.0.4:57662 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027413s
	[INFO] 10.244.0.4:42804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086384s
	[INFO] 10.244.1.2:42712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149698s
	[INFO] 10.244.1.2:33468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117843s
	[INFO] 10.244.1.2:53696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125501s
	[INFO] 10.244.1.2:59050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121214s
	[INFO] 10.244.2.2:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129604s
	[INFO] 10.244.2.2:33290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127517s
	[INFO] 10.244.0.4:48739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096314s
	[INFO] 10.244.0.4:42249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049139s
	[INFO] 10.244.1.2:35348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327466s
	[INFO] 10.244.1.2:36802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158894s
	[INFO] 10.244.2.2:33661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134839s
	[INFO] 10.244.2.2:41493 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135174s
	[INFO] 10.244.0.4:55720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006804s
	[INFO] 10.244.0.4:59841 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009592s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1896&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d] <==
	[INFO] 10.244.1.2:53881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133465s
	[INFO] 10.244.2.2:44355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163171s
	[INFO] 10.244.2.2:36763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001800499s
	[INFO] 10.244.2.2:41469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115361s
	[INFO] 10.244.2.2:40909 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145743s
	[INFO] 10.244.2.2:44681 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149088s
	[INFO] 10.244.0.4:51555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069764s
	[INFO] 10.244.0.4:53574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001057592s
	[INFO] 10.244.0.4:45350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035427s
	[INFO] 10.244.0.4:48145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190172s
	[INFO] 10.244.2.2:36852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187208s
	[INFO] 10.244.2.2:58201 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010302s
	[INFO] 10.244.0.4:45335 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139302s
	[INFO] 10.244.0.4:41623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054642s
	[INFO] 10.244.1.2:43471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145957s
	[INFO] 10.244.1.2:55858 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179256s
	[INFO] 10.244.2.2:35120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154146s
	[INFO] 10.244.2.2:57748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106668s
	[INFO] 10.244.0.4:35176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009163s
	[INFO] 10.244.0.4:35630 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191227s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1842&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1865&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1865&timeout=5m46s&timeoutSeconds=346&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1] <==
	Trace[858252908]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer 13702ms (18:54:11.872)
	Trace[858252908]: [13.702752412s] [13.702752412s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:55:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:54:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:54:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:54:31 +0000   Fri, 13 Sep 2024 18:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:54:31 +0000   Fri, 13 Sep 2024 18:42:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 94s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-617764 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Warning  ContainerGCFailed        2m35s (x2 over 3m35s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m27s (x3 over 3m16s)  kubelet          Node ha-617764 status is now: NodeNotReady
	  Normal   RegisteredNode           93s                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           93s                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
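
	The ContainerGCFailed warning above shows the kubelet briefly losing /var/run/crio/crio.sock while the runtime was restarted. A hedged way to confirm the socket and service came back on the node, assuming the CRI-O unit inside the minikube VM is named "crio":

	  # hypothetical check, not part of the recorded test run
	  minikube -p ha-617764 ssh "sudo systemctl status crio --no-pager && ls -l /var/run/crio/crio.sock"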
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    3ff149de-a1f6-4a53-9c3a-07c56d69cf30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  NodeNotReady             8m51s                node-controller  Node ha-617764-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           93s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           35s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	
	
	Name:               ha-617764-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_44_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:55:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:55:35 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:55:35 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:55:35 +0000   Fri, 13 Sep 2024 18:44:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:55:35 +0000   Fri, 13 Sep 2024 18:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-617764-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf9ad263c8a24e5ab1b585d83dd0c49b
	  System UUID:                bf9ad263-c8a2-4e5a-b1b5-85d83dd0c49b
	  Boot ID:                    2d4dbd2c-c2a8-496d-a8b1-aec5a3e1ead3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-srmxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-617764-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-8mbkd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-617764-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-617764-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7bpk5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-617764-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-617764-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 42s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-617764-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-617764-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node ha-617764-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node ha-617764-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node ha-617764-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-617764-m03 has been rebooted, boot id: 2d4dbd2c-c2a8-496d-a8b1-aec5a3e1ead3
	  Normal   RegisteredNode           35s                node-controller  Node ha-617764-m03 event: Registered Node ha-617764-m03 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:55:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:55:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:55:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:55:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:55:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-47jgz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5rlkn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeReady                9m50s              kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   RegisteredNode           93s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeNotReady             53s                node-controller  Node ha-617764-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-617764-m04 has been rebooted, boot id: 44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Normal   NodeReady                9s                 kubelet          Node ha-617764-m04 status is now: NodeReady
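
	All four node descriptions above end with fresh RegisteredNode events and Ready=True conditions, so the cluster did converge after the restart. A quick cross-check of the same state, sketched against the ha-617764 context:

	  # hypothetical follow-up, not part of the recorded test run
	  kubectl --context ha-617764 get nodes -o wide
	  # node heartbeats are Lease objects; RenewTime matches the Lease sections above
	  kubectl --context ha-617764 -n kube-node-lease get leases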
	
	
	==> dmesg <==
	[ +10.036071] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051740] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182667] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.119649] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.275654] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.901030] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.328019] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	[Sep13 18:53] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.152592] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.176959] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +0.278033] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +6.938453] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.087335] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.505183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221465] kauditd_printk_skb: 85 callbacks suppressed
	[Sep13 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.066370] kauditd_printk_skb: 4 callbacks suppressed
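
	The burst of systemd-fstab-generator messages at Sep13 18:53 marks the in-VM restart of the container runtime and kubelet stack. The same kernel ring buffer can be read directly from the guest; a sketch, with the minikube profile name as an assumption:

	  # hypothetical follow-up, not part of the recorded test run
	  minikube -p ha-617764 ssh "sudo dmesg | tail -n 100"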
	
	
	==> etcd [15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89] <==
	{"level":"warn","ts":"2024-09-13T18:55:02.979702Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.118:2380/version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:02.979755Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:06.981124Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.118:2380/version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:06.981184Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:07.526630Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:07.528053Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:10.983317Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.118:2380/version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:10.983444Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:12.527684Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:12.528828Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:14.984900Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.118:2380/version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:14.984957Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d1b5616c38681b99","error":"Get \"https://192.168.39.118:2380/version\": dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:17.528195Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T18:55:17.529425Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d1b5616c38681b99","rtt":"0s","error":"dial tcp 192.168.39.118:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-13T18:55:18.305845Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:18.306048Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:18.309508Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:18.319201Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"d1b5616c38681b99","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-13T18:55:18.319335Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:18.326463Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"d1b5616c38681b99","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-13T18:55:18.326549Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:19.216734Z","caller":"traceutil/trace.go:171","msg":"trace[1422151895] transaction","detail":"{read_only:false; response_revision:2359; number_of_response:1; }","duration":"160.343154ms","start":"2024-09-13T18:55:19.056370Z","end":"2024-09-13T18:55:19.216714Z","steps":["trace[1422151895] 'process raft request'  (duration: 160.238449ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:55:19.218487Z","caller":"traceutil/trace.go:171","msg":"trace[1096473554] linearizableReadLoop","detail":"{readStateIndex:2777; appliedIndex:2778; }","duration":"150.825564ms","start":"2024-09-13T18:55:19.067646Z","end":"2024-09-13T18:55:19.218472Z","steps":["trace[1096473554] 'read index received'  (duration: 150.820286ms)","trace[1096473554] 'applied index is now lower than readState.Index'  (duration: 3.856µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:55:19.218640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.99442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:55:19.218733Z","caller":"traceutil/trace.go:171","msg":"trace[1131931288] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2359; }","duration":"151.099422ms","start":"2024-09-13T18:55:19.067623Z","end":"2024-09-13T18:55:19.218723Z","steps":["trace[1131931288] 'agreement among raft nodes before linearized reading'  (duration: 150.970018ms)"],"step_count":1}
	
	
	==> etcd [3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5] <==
	{"level":"warn","ts":"2024-09-13T18:52:00.370494Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:51:59.772954Z","time spent":"597.509508ms","remote":"127.0.0.1:54368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/09/13 18:52:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-13T18:52:00.402546Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T18:52:00.402739Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T18:52:00.404118Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-13T18:52:00.404398Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404476Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404521Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404572Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404630Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404643Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404649Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404661Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404768Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404813Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404867Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.408189Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"warn","ts":"2024-09-13T18:52:00.408214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.642224444s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-13T18:52:00.408375Z","caller":"traceutil/trace.go:171","msg":"trace[494011848] range","detail":"{range_begin:; range_end:; }","duration":"8.642401593s","start":"2024-09-13T18:51:51.765965Z","end":"2024-09-13T18:52:00.408366Z","steps":["trace[494011848] 'agreement among raft nodes before linearized reading'  (duration: 8.642223024s)"],"step_count":1}
	{"level":"error","ts":"2024-09-13T18:52:00.408425Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-13T18:52:00.408519Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-13T18:52:00.408810Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-617764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> kernel <==
	 18:56:03 up 14 min,  0 users,  load average: 0.57, 0.51, 0.31
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1] <==
	I0913 18:51:35.370279       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:35.370303       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:35.370514       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:35.370554       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:35.370658       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:35.370696       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	E0913 18:51:35.968903       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1896&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0913 18:51:45.378637       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:51:45.378689       1 main.go:299] handling current node
	I0913 18:51:45.378721       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:45.378727       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:45.378867       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:45.378889       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:45.378940       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:45.378944       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	W0913 18:51:54.400664       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1896": dial tcp 10.96.0.1:443: connect: no route to host
	E0913 18:51:54.400725       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1896": dial tcp 10.96.0.1:443: connect: no route to host
	I0913 18:51:55.369335       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:51:55.369380       1 main.go:299] handling current node
	I0913 18:51:55.369395       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:55.369400       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:55.369546       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:55.369569       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:55.369634       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:55.369653       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
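
	Like CoreDNS, this kindnet instance can still enumerate nodes from its cache but cannot reach the 10.96.0.1 Service VIP while the apiserver is down ("no route to host"). Two hedged checks, assuming kube-proxy runs in its default iptables mode inside the VM:

	  # hypothetical follow-up, not part of the recorded test run
	  kubectl --context ha-617764 -n default get endpoints kubernetes
	  minikube -p ha-617764 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1"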
	
	
	==> kindnet [dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e] <==
	I0913 18:55:27.990630       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:55:37.988059       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:55:37.988141       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:55:37.988395       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:55:37.988429       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:55:37.988585       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:55:37.988621       1 main.go:299] handling current node
	I0913 18:55:37.988655       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:55:37.988664       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:55:47.986539       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:55:47.986591       1 main.go:299] handling current node
	I0913 18:55:47.986613       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:55:47.986622       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:55:47.986841       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:55:47.986875       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:55:47.986969       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:55:47.986980       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:55:57.988870       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:55:57.988950       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:55:57.989083       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:55:57.989105       1 main.go:299] handling current node
	I0913 18:55:57.989120       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:55:57.989125       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:55:57.989173       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:55:57.989179       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc] <==
	I0913 18:54:26.762950       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0913 18:54:26.762982       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0913 18:54:26.855539       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 18:54:26.855613       1 policy_source.go:224] refreshing policies
	I0913 18:54:26.863834       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 18:54:26.863916       1 aggregator.go:171] initial CRD sync complete...
	I0913 18:54:26.863956       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 18:54:26.863979       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 18:54:26.864001       1 cache.go:39] Caches are synced for autoregister controller
	I0913 18:54:26.896130       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 18:54:26.930711       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 18:54:26.931123       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 18:54:26.932008       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 18:54:26.934071       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 18:54:26.936515       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 18:54:26.937087       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 18:54:26.937125       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 18:54:26.937311       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 18:54:26.947349       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0913 18:54:27.101971       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118]
	I0913 18:54:27.103804       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 18:54:27.111764       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0913 18:54:27.115555       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0913 18:54:27.738608       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0913 18:54:28.135579       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118 192.168.39.145]
	
	
	==> kube-apiserver [ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a] <==
	I0913 18:53:47.282613       1 options.go:228] external host was not specified, using 192.168.39.145
	I0913 18:53:47.287697       1 server.go:142] Version: v1.31.1
	I0913 18:53:47.292409       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:53:48.004354       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0913 18:53:48.010657       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 18:53:48.014664       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0913 18:53:48.014696       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0913 18:53:48.014906       1 instance.go:232] Using reconciler: lease
	W0913 18:54:08.003438       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0913 18:54:08.003437       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0913 18:54:08.015771       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0913 18:54:08.015858       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d] <==
	I0913 18:54:45.901709       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-58pk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-58pk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0913 18:54:45.903201       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"10c7d117-e7b8-460b-bc18-c29f1dc4ff8b", APIVersion:"v1", ResourceVersion:"248", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-58pk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-58pk6": the object has been modified; please apply your changes to the latest version and try again
	I0913 18:54:45.939294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.746782ms"
	I0913 18:54:45.939481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.607µs"
	I0913 18:54:55.910665       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-58pk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-58pk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0913 18:54:55.911044       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"10c7d117-e7b8-460b-bc18-c29f1dc4ff8b", APIVersion:"v1", ResourceVersion:"248", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-58pk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-58pk6": the object has been modified; please apply your changes to the latest version and try again
	I0913 18:54:55.915748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.255676ms"
	I0913 18:54:55.915929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="98.755µs"
	I0913 18:55:04.819763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m03"
	I0913 18:55:05.640101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.518558ms"
	I0913 18:55:05.641692       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.152µs"
	I0913 18:55:10.233294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:10.255594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:10.395149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:10.623147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 18:55:15.376490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:24.095888       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.419788ms"
	I0913 18:55:24.096630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.268µs"
	I0913 18:55:28.063104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:28.153953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:35.140560       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m03"
	I0913 18:55:54.300521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:54.300813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-617764-m04"
	I0913 18:55:54.322204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 18:55:55.294389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	
	
	==> kube-controller-manager [da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0] <==
	I0913 18:53:47.517644       1 serving.go:386] Generated self-signed cert in-memory
	I0913 18:53:47.859631       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0913 18:53:47.859681       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:53:47.861631       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0913 18:53:47.862454       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 18:53:47.862626       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 18:53:47.862727       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0913 18:54:09.021092       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.145:8443/healthz\": dial tcp 192.168.39.145:8443: connect: connection refused"
	
	
	==> kube-proxy [1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163] <==
	E0913 18:54:28.193745       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-617764\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0913 18:54:28.194003       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0913 18:54:28.194170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:54:28.234105       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:54:28.234302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:54:28.234395       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:54:28.237390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:54:28.237818       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:54:28.237860       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:54:28.240362       1 config.go:199] "Starting service config controller"
	I0913 18:54:28.240424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:54:28.240535       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:54:28.240556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:54:28.241385       1 config.go:328] "Starting node config controller"
	I0913 18:54:28.241411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0913 18:54:31.266663       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 18:54:31.266902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.267155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.270424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.270680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 18:54:32.241327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:54:32.541475       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:54:32.642363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218] <==
	E0913 18:50:42.913076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:42.913118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:42.913227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.080660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.080899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.082079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.082429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.082306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.082585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:59.298341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:59.298561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:02.368906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:02.369558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:02.369471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:02.370108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:17.728693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:17.728769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:20.801275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:20.801339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:26.945182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:26.945336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:48.449478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:48.449614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:54.592787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:54.592909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222] <==
	W0913 18:54:18.098708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.098849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.198406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.198532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.337710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.337790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.785652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.785751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:23.154505       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:23.154624       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:26.780601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:54:26.780738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.780951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:54:26.781066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:54:26.783651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:54:26.784151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.784400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:54:26.784439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:54:44.032097       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c] <==
	I0913 18:45:52.688769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.689590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.689658       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fb31ed1c-fbc0-46ca-b60c-7201362519ff(kube-system/kube-proxy-5rlkn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5rlkn"
	E0913 18:45:52.689678       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-5rlkn"
	I0913 18:45:52.689696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.694462       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:45:52.694585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 848151c4-6f4d-47e6-9447-bd1d09469957(kube-system/kube-proxy-xtt2d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xtt2d"
	E0913 18:45:52.694606       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-xtt2d"
	I0913 18:45:52.694636       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:51:44.585541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0913 18:51:45.076949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0913 18:51:46.688890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0913 18:51:46.694206       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0913 18:51:48.372407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0913 18:51:49.073673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0913 18:51:49.842409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:51.306914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0913 18:51:51.530632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0913 18:51:51.856307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:52.826080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0913 18:51:52.886933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0913 18:51:54.578007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:55.399131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0913 18:51:56.976982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0913 18:52:00.329637       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 18:54:54 ha-617764 kubelet[1315]: E0913 18:54:54.515218    1315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1e5f1a84-1798-430e-af04-82469e8f4a7b)\"" pod="kube-system/storage-provisioner" podUID="1e5f1a84-1798-430e-af04-82469e8f4a7b"
	Sep 13 18:54:58 ha-617764 kubelet[1315]: E0913 18:54:58.751597    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253698750634986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:54:58 ha-617764 kubelet[1315]: E0913 18:54:58.752169    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253698750634986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:07 ha-617764 kubelet[1315]: I0913 18:55:07.515016    1315 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-617764" podUID="7960420c-8f57-47a3-8d63-de5ad027f8bd"
	Sep 13 18:55:07 ha-617764 kubelet[1315]: I0913 18:55:07.530442    1315 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-617764"
	Sep 13 18:55:08 ha-617764 kubelet[1315]: I0913 18:55:08.530793    1315 scope.go:117] "RemoveContainer" containerID="59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f"
	Sep 13 18:55:08 ha-617764 kubelet[1315]: E0913 18:55:08.754636    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253708754150119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:08 ha-617764 kubelet[1315]: E0913 18:55:08.754692    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253708754150119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:18 ha-617764 kubelet[1315]: E0913 18:55:18.758821    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253718757594010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:18 ha-617764 kubelet[1315]: E0913 18:55:18.759070    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253718757594010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:22 ha-617764 kubelet[1315]: I0913 18:55:22.929881    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-t4fwq" podStartSLOduration=605.581859336 podStartE2EDuration="10m8.929808432s" podCreationTimestamp="2024-09-13 18:45:14 +0000 UTC" firstStartedPulling="2024-09-13 18:45:14.91242577 +0000 UTC m=+166.549913166" lastFinishedPulling="2024-09-13 18:45:18.260374866 +0000 UTC m=+169.897862262" observedRunningTime="2024-09-13 18:45:19.210451982 +0000 UTC m=+170.847939387" watchObservedRunningTime="2024-09-13 18:55:22.929808432 +0000 UTC m=+774.567295835"
	Sep 13 18:55:22 ha-617764 kubelet[1315]: I0913 18:55:22.947957    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-617764" podStartSLOduration=15.947941239 podStartE2EDuration="15.947941239s" podCreationTimestamp="2024-09-13 18:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-13 18:55:22.945149462 +0000 UTC m=+774.582636848" watchObservedRunningTime="2024-09-13 18:55:22.947941239 +0000 UTC m=+774.585428643"
	Sep 13 18:55:28 ha-617764 kubelet[1315]: E0913 18:55:28.543730    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 18:55:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:55:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:55:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:55:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:55:28 ha-617764 kubelet[1315]: E0913 18:55:28.762957    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253728761969710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:28 ha-617764 kubelet[1315]: E0913 18:55:28.763006    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253728761969710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:38 ha-617764 kubelet[1315]: E0913 18:55:38.765005    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253738764430773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:38 ha-617764 kubelet[1315]: E0913 18:55:38.765067    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253738764430773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:48 ha-617764 kubelet[1315]: E0913 18:55:48.767455    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253748766954664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:48 ha-617764 kubelet[1315]: E0913 18:55:48.767517    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253748766954664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:58 ha-617764 kubelet[1315]: E0913 18:55:58.771697    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253758770855038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:55:58 ha-617764 kubelet[1315]: E0913 18:55:58.772322    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253758770855038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 18:56:02.139044   30386 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 stop -v=7 --alsologtostderr: exit status 82 (2m0.46001892s)

                                                
                                                
-- stdout --
	* Stopping node "ha-617764-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:56:21.283252   30783 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:56:21.283384   30783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:56:21.283394   30783 out.go:358] Setting ErrFile to fd 2...
	I0913 18:56:21.283399   30783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:56:21.283604   30783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:56:21.283842   30783 out.go:352] Setting JSON to false
	I0913 18:56:21.283934   30783 mustload.go:65] Loading cluster: ha-617764
	I0913 18:56:21.284323   30783 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:56:21.284417   30783 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:56:21.284603   30783 mustload.go:65] Loading cluster: ha-617764
	I0913 18:56:21.284752   30783 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:56:21.284780   30783 stop.go:39] StopHost: ha-617764-m04
	I0913 18:56:21.285168   30783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:56:21.285211   30783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:56:21.299904   30783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0913 18:56:21.300490   30783 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:56:21.301066   30783 main.go:141] libmachine: Using API Version  1
	I0913 18:56:21.301102   30783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:56:21.301406   30783 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:56:21.303992   30783 out.go:177] * Stopping node "ha-617764-m04"  ...
	I0913 18:56:21.305340   30783 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 18:56:21.305368   30783 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:56:21.305545   30783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 18:56:21.305577   30783 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:56:21.308266   30783 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:56:21.308767   30783 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:55:49 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:56:21.308795   30783 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:56:21.308989   30783 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:56:21.309142   30783 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:56:21.309251   30783 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:56:21.309380   30783 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	I0913 18:56:21.397106   30783 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 18:56:21.451006   30783 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 18:56:21.503983   30783 main.go:141] libmachine: Stopping "ha-617764-m04"...
	I0913 18:56:21.504011   30783 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:56:21.505486   30783 main.go:141] libmachine: (ha-617764-m04) Calling .Stop
	I0913 18:56:21.508543   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 0/120
	I0913 18:56:22.509840   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 1/120
	I0913 18:56:23.511147   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 2/120
	I0913 18:56:24.512450   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 3/120
	I0913 18:56:25.513715   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 4/120
	I0913 18:56:26.515591   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 5/120
	I0913 18:56:27.516992   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 6/120
	I0913 18:56:28.518189   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 7/120
	I0913 18:56:29.519390   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 8/120
	I0913 18:56:30.521007   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 9/120
	I0913 18:56:31.522488   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 10/120
	I0913 18:56:32.523771   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 11/120
	I0913 18:56:33.525055   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 12/120
	I0913 18:56:34.526319   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 13/120
	I0913 18:56:35.527639   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 14/120
	I0913 18:56:36.529387   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 15/120
	I0913 18:56:37.530721   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 16/120
	I0913 18:56:38.532235   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 17/120
	I0913 18:56:39.533500   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 18/120
	I0913 18:56:40.534950   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 19/120
	I0913 18:56:41.537122   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 20/120
	I0913 18:56:42.538715   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 21/120
	I0913 18:56:43.540480   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 22/120
	I0913 18:56:44.542453   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 23/120
	I0913 18:56:45.543679   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 24/120
	I0913 18:56:46.545572   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 25/120
	I0913 18:56:47.546864   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 26/120
	I0913 18:56:48.548408   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 27/120
	I0913 18:56:49.549671   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 28/120
	I0913 18:56:50.551171   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 29/120
	I0913 18:56:51.553335   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 30/120
	I0913 18:56:52.554818   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 31/120
	I0913 18:56:53.556134   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 32/120
	I0913 18:56:54.557411   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 33/120
	I0913 18:56:55.558893   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 34/120
	I0913 18:56:56.560664   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 35/120
	I0913 18:56:57.562240   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 36/120
	I0913 18:56:58.564633   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 37/120
	I0913 18:56:59.566160   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 38/120
	I0913 18:57:00.567482   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 39/120
	I0913 18:57:01.569616   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 40/120
	I0913 18:57:02.570825   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 41/120
	I0913 18:57:03.572561   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 42/120
	I0913 18:57:04.574525   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 43/120
	I0913 18:57:05.576064   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 44/120
	I0913 18:57:06.578194   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 45/120
	I0913 18:57:07.579450   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 46/120
	I0913 18:57:08.580813   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 47/120
	I0913 18:57:09.582068   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 48/120
	I0913 18:57:10.583404   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 49/120
	I0913 18:57:11.585094   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 50/120
	I0913 18:57:12.586452   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 51/120
	I0913 18:57:13.587745   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 52/120
	I0913 18:57:14.589049   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 53/120
	I0913 18:57:15.590314   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 54/120
	I0913 18:57:16.592041   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 55/120
	I0913 18:57:17.593428   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 56/120
	I0913 18:57:18.594731   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 57/120
	I0913 18:57:19.596469   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 58/120
	I0913 18:57:20.597798   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 59/120
	I0913 18:57:21.599832   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 60/120
	I0913 18:57:22.601056   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 61/120
	I0913 18:57:23.602468   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 62/120
	I0913 18:57:24.604595   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 63/120
	I0913 18:57:25.605934   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 64/120
	I0913 18:57:26.607632   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 65/120
	I0913 18:57:27.608939   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 66/120
	I0913 18:57:28.610163   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 67/120
	I0913 18:57:29.611454   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 68/120
	I0913 18:57:30.612753   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 69/120
	I0913 18:57:31.614915   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 70/120
	I0913 18:57:32.617240   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 71/120
	I0913 18:57:33.618649   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 72/120
	I0913 18:57:34.619880   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 73/120
	I0913 18:57:35.621303   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 74/120
	I0913 18:57:36.623620   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 75/120
	I0913 18:57:37.625082   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 76/120
	I0913 18:57:38.626475   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 77/120
	I0913 18:57:39.628012   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 78/120
	I0913 18:57:40.629745   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 79/120
	I0913 18:57:41.631334   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 80/120
	I0913 18:57:42.632601   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 81/120
	I0913 18:57:43.633882   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 82/120
	I0913 18:57:44.635070   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 83/120
	I0913 18:57:45.636520   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 84/120
	I0913 18:57:46.638417   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 85/120
	I0913 18:57:47.640445   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 86/120
	I0913 18:57:48.641733   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 87/120
	I0913 18:57:49.643029   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 88/120
	I0913 18:57:50.644339   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 89/120
	I0913 18:57:51.646392   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 90/120
	I0913 18:57:52.648580   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 91/120
	I0913 18:57:53.650135   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 92/120
	I0913 18:57:54.651514   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 93/120
	I0913 18:57:55.652853   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 94/120
	I0913 18:57:56.654882   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 95/120
	I0913 18:57:57.656412   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 96/120
	I0913 18:57:58.657958   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 97/120
	I0913 18:57:59.659406   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 98/120
	I0913 18:58:00.660809   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 99/120
	I0913 18:58:01.663051   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 100/120
	I0913 18:58:02.664499   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 101/120
	I0913 18:58:03.665680   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 102/120
	I0913 18:58:04.667184   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 103/120
	I0913 18:58:05.668428   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 104/120
	I0913 18:58:06.669837   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 105/120
	I0913 18:58:07.671238   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 106/120
	I0913 18:58:08.672771   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 107/120
	I0913 18:58:09.674406   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 108/120
	I0913 18:58:10.675616   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 109/120
	I0913 18:58:11.677471   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 110/120
	I0913 18:58:12.678866   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 111/120
	I0913 18:58:13.680631   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 112/120
	I0913 18:58:14.682221   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 113/120
	I0913 18:58:15.683584   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 114/120
	I0913 18:58:16.685554   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 115/120
	I0913 18:58:17.687758   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 116/120
	I0913 18:58:18.689007   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 117/120
	I0913 18:58:19.690792   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 118/120
	I0913 18:58:20.692769   30783 main.go:141] libmachine: (ha-617764-m04) Waiting for machine to stop 119/120
	I0913 18:58:21.694061   30783 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 18:58:21.694143   30783 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0913 18:58:21.696032   30783 out.go:201] 
	W0913 18:58:21.697759   30783 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0913 18:58:21.697776   30783 out.go:270] * 
	* 
	W0913 18:58:21.699820   30783 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 18:58:21.701368   30783 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-617764 stop -v=7 --alsologtostderr": exit status 82
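The stderr above shows the stop path polling the VM once per second for up to 120 attempts before giving up with GUEST_STOP_TIMEOUT. A minimal Go sketch of that poll-and-timeout shape, assuming a hypothetical getState callback rather than minikube's actual libmachine driver interface:

	// Sketch of the "Waiting for machine to stop N/120" loop seen above.
	// getState is a stand-in for whatever the driver exposes; it is an
	// assumption for illustration, not minikube's real API.
	package main

	import (
		"fmt"
		"time"
	)

	func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		// This branch is what surfaces as GUEST_STOP_TIMEOUT in the report:
		// the retry budget runs out while the VM still reports "Running".
		return fmt.Errorf("unable to stop vm, current state %q", "Running")
	}

	func main() {
		// Simulated driver that never stops, so the timeout path is exercised quickly.
		getState := func() (string, error) { return "Running", nil }
		if err := waitForStop(getState, 5, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}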
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr: exit status 3 (19.08331214s)

                                                
                                                
-- stdout --
	ha-617764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-617764-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:58:21.746660   31222 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:58:21.746921   31222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:21.746931   31222 out.go:358] Setting ErrFile to fd 2...
	I0913 18:58:21.746935   31222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:21.747166   31222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:58:21.747386   31222 out.go:352] Setting JSON to false
	I0913 18:58:21.747413   31222 mustload.go:65] Loading cluster: ha-617764
	I0913 18:58:21.747468   31222 notify.go:220] Checking for updates...
	I0913 18:58:21.747918   31222 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:21.747939   31222 status.go:255] checking status of ha-617764 ...
	I0913 18:58:21.748523   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:21.748575   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:21.775496   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0913 18:58:21.776071   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:21.776571   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:21.776591   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:21.777045   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:21.777260   31222 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:58:21.778982   31222 status.go:330] ha-617764 host status = "Running" (err=<nil>)
	I0913 18:58:21.779003   31222 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:58:21.779308   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:21.779347   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:21.793888   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0913 18:58:21.794385   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:21.794868   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:21.794894   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:21.795199   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:21.795373   31222 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:58:21.798027   31222 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:21.798511   31222 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:21.798535   31222 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:21.798693   31222 host.go:66] Checking if "ha-617764" exists ...
	I0913 18:58:21.799073   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:21.799117   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:21.814552   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I0913 18:58:21.814949   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:21.815573   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:21.815600   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:21.815961   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:21.816168   31222 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:21.816351   31222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:58:21.816383   31222 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:21.819425   31222 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:21.820046   31222 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:21.820069   31222 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:21.820263   31222 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:21.820405   31222 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:21.820551   31222 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:21.820772   31222 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:58:21.905498   31222 ssh_runner.go:195] Run: systemctl --version
	I0913 18:58:21.912664   31222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:58:21.930569   31222 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:58:21.930601   31222 api_server.go:166] Checking apiserver status ...
	I0913 18:58:21.930676   31222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:58:21.949726   31222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4843/cgroup
	W0913 18:58:21.960918   31222 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4843/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:58:21.960992   31222 ssh_runner.go:195] Run: ls
	I0913 18:58:21.965635   31222 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:58:21.970576   31222 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:58:21.970606   31222 status.go:422] ha-617764 apiserver status = Running (err=<nil>)
	I0913 18:58:21.970618   31222 status.go:257] ha-617764 status: &{Name:ha-617764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:58:21.970634   31222 status.go:255] checking status of ha-617764-m02 ...
	I0913 18:58:21.971031   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:21.971079   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:21.986339   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0913 18:58:21.986808   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:21.987394   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:21.987420   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:21.987737   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:21.987927   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 18:58:21.989456   31222 status.go:330] ha-617764-m02 host status = "Running" (err=<nil>)
	I0913 18:58:21.989472   31222 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:58:21.989745   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:21.989778   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:22.004378   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I0913 18:58:22.004813   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:22.005360   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:22.005386   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:22.005674   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:22.005849   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetIP
	I0913 18:58:22.008533   31222 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:58:22.009002   31222 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:53:51 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:58:22.009026   31222 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:58:22.009170   31222 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 18:58:22.009498   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:22.009551   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:22.024748   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0913 18:58:22.025209   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:22.025694   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:22.025714   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:22.026001   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:22.026224   31222 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 18:58:22.026386   31222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:58:22.026409   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHHostname
	I0913 18:58:22.029049   31222 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:58:22.029429   31222 main.go:141] libmachine: (ha-617764-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:42:52", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:53:51 +0000 UTC Type:0 Mac:52:54:00:ab:42:52 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-617764-m02 Clientid:01:52:54:00:ab:42:52}
	I0913 18:58:22.029449   31222 main.go:141] libmachine: (ha-617764-m02) DBG | domain ha-617764-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:ab:42:52 in network mk-ha-617764
	I0913 18:58:22.029635   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHPort
	I0913 18:58:22.029802   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHKeyPath
	I0913 18:58:22.029932   31222 main.go:141] libmachine: (ha-617764-m02) Calling .GetSSHUsername
	I0913 18:58:22.030058   31222 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m02/id_rsa Username:docker}
	I0913 18:58:22.111397   31222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:58:22.129106   31222 kubeconfig.go:125] found "ha-617764" server: "https://192.168.39.254:8443"
	I0913 18:58:22.129138   31222 api_server.go:166] Checking apiserver status ...
	I0913 18:58:22.129178   31222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:58:22.145190   31222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup
	W0913 18:58:22.157692   31222 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:58:22.157747   31222 ssh_runner.go:195] Run: ls
	I0913 18:58:22.166159   31222 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:58:22.170603   31222 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:58:22.170629   31222 status.go:422] ha-617764-m02 apiserver status = Running (err=<nil>)
	I0913 18:58:22.170639   31222 status.go:257] ha-617764-m02 status: &{Name:ha-617764-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:58:22.170657   31222 status.go:255] checking status of ha-617764-m04 ...
	I0913 18:58:22.171047   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:22.171102   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:22.186763   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0913 18:58:22.187220   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:22.187717   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:22.187733   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:22.188047   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:22.188227   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetState
	I0913 18:58:22.190021   31222 status.go:330] ha-617764-m04 host status = "Running" (err=<nil>)
	I0913 18:58:22.190037   31222 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:58:22.190351   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:22.190407   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:22.206259   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0913 18:58:22.206755   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:22.207251   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:22.207277   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:22.207629   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:22.207806   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetIP
	I0913 18:58:22.210526   31222 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:58:22.210998   31222 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:55:49 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:58:22.211019   31222 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:58:22.211253   31222 host.go:66] Checking if "ha-617764-m04" exists ...
	I0913 18:58:22.211553   31222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:22.211600   31222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:22.226448   31222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0913 18:58:22.226943   31222 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:22.227451   31222 main.go:141] libmachine: Using API Version  1
	I0913 18:58:22.227478   31222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:22.227792   31222 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:22.227999   31222 main.go:141] libmachine: (ha-617764-m04) Calling .DriverName
	I0913 18:58:22.228175   31222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:58:22.228205   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHHostname
	I0913 18:58:22.231330   31222 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:58:22.231791   31222 main.go:141] libmachine: (ha-617764-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:6e:e8", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:55:49 +0000 UTC Type:0 Mac:52:54:00:08:6e:e8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-617764-m04 Clientid:01:52:54:00:08:6e:e8}
	I0913 18:58:22.231814   31222 main.go:141] libmachine: (ha-617764-m04) DBG | domain ha-617764-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:08:6e:e8 in network mk-ha-617764
	I0913 18:58:22.232017   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHPort
	I0913 18:58:22.232182   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHKeyPath
	I0913 18:58:22.232325   31222 main.go:141] libmachine: (ha-617764-m04) Calling .GetSSHUsername
	I0913 18:58:22.232460   31222 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m04/id_rsa Username:docker}
	W0913 18:58:40.786370   31222 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.238:22: connect: no route to host
	W0913 18:58:40.786471   31222 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host
	E0913 18:58:40.786494   31222 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host
	I0913 18:58:40.786503   31222 status.go:257] ha-617764-m04 status: &{Name:ha-617764-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0913 18:58:40.786521   31222 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr" : exit status 3
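In the status stderr above, the worker check never gets past the SSH dial: tcp 192.168.39.238:22 returns "connect: no route to host", which is why ha-617764-m04 is reported as Host:Error with Kubelet:Nonexistent. A minimal sketch of that kind of reachability probe, assuming a plain TCP dial with a timeout (the helper name and timeout are illustrative, not minikube's sshutil code):

	// Probe TCP port 22 before attempting an SSH session; a down VM or a
	// stale route shows up here as "connect: no route to host".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return fmt.Errorf("dial %s: %w", addr, err)
		}
		return conn.Close()
	}

	func main() {
		if err := sshReachable("192.168.39.238:22", 5*time.Second); err != nil {
			fmt.Println("host unreachable:", err)
			return
		}
		fmt.Println("ssh port reachable")
	}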
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.703632427s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764 -v=7                                                         | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-617764 -v=7                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:51 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	| node    | ha-617764 node delete m03 -v=7                                                 | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-617764 stop -v=7                                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:51:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:51:59.451042   29072 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:51:59.451140   29072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:51:59.451144   29072 out.go:358] Setting ErrFile to fd 2...
	I0913 18:51:59.451149   29072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:51:59.451314   29072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:51:59.451836   29072 out.go:352] Setting JSON to false
	I0913 18:51:59.452744   29072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2062,"bootTime":1726251457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:51:59.452831   29072 start.go:139] virtualization: kvm guest
	I0913 18:51:59.455234   29072 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:51:59.456815   29072 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:51:59.456815   29072 notify.go:220] Checking for updates...
	I0913 18:51:59.458955   29072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:51:59.460165   29072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:51:59.461323   29072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:51:59.462462   29072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:51:59.463570   29072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:51:59.465250   29072 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:51:59.465363   29072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:51:59.465995   29072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:51:59.466037   29072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:51:59.481305   29072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0913 18:51:59.481921   29072 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:51:59.482544   29072 main.go:141] libmachine: Using API Version  1
	I0913 18:51:59.482573   29072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:51:59.482889   29072 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:51:59.483034   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.517035   29072 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:51:59.518169   29072 start.go:297] selected driver: kvm2
	I0913 18:51:59.518183   29072 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:51:59.518313   29072 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:51:59.518606   29072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:51:59.518673   29072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:51:59.533606   29072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:51:59.534276   29072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:51:59.534309   29072 cni.go:84] Creating CNI manager for ""
	I0913 18:51:59.534361   29072 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 18:51:59.534417   29072 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:51:59.534554   29072 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:51:59.536450   29072 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:51:59.537765   29072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:51:59.537816   29072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:51:59.537831   29072 cache.go:56] Caching tarball of preloaded images
	I0913 18:51:59.537907   29072 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:51:59.537916   29072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:51:59.538024   29072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:51:59.538284   29072 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:51:59.538328   29072 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "ha-617764"
	I0913 18:51:59.538341   29072 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:51:59.538348   29072 fix.go:54] fixHost starting: 
	I0913 18:51:59.538613   29072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:51:59.538646   29072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:51:59.553757   29072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0913 18:51:59.554281   29072 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:51:59.554746   29072 main.go:141] libmachine: Using API Version  1
	I0913 18:51:59.554766   29072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:51:59.555115   29072 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:51:59.555302   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.555470   29072 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:51:59.556975   29072 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:51:59.556997   29072 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:51:59.559148   29072 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:51:59.560610   29072 machine.go:93] provisionDockerMachine start ...
	I0913 18:51:59.560637   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:51:59.560860   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.563529   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.564010   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.564033   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.564192   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.564370   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.564495   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.564621   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.564767   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.564944   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.564955   29072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:51:59.675402   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:51:59.675429   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.675681   29072 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:51:59.675704   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.675871   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.678408   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.678803   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.678836   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.678929   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.679153   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.679316   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.679474   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.679633   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.679848   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.679862   29072 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:51:59.811505   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:51:59.811532   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.814236   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.814663   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.814681   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.814889   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:51:59.815061   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.815194   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:51:59.815302   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:51:59.815458   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:51:59.815642   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:51:59.815664   29072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:51:59.923967   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
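The script above is idempotent: it rewrites the 127.0.1.1 entry only when no line in /etc/hosts already ends in the hostname. A minimal spot-check sketch over the same SSH session minikube uses (hypothetical check, not part of the test run):

    $ hostname                          # expect: ha-617764
    $ grep -n 'ha-617764' /etc/hosts    # expect a 127.0.1.1 entry or an existing match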
	I0913 18:51:59.923996   29072 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:51:59.924029   29072 buildroot.go:174] setting up certificates
	I0913 18:51:59.924037   29072 provision.go:84] configureAuth start
	I0913 18:51:59.924045   29072 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:51:59.924335   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:51:59.927027   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.927351   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.927381   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.927497   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:51:59.929576   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.929904   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:51:59.929924   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:51:59.930048   29072 provision.go:143] copyHostCerts
	I0913 18:51:59.930078   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:51:59.930140   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:51:59.930153   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:51:59.930219   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:51:59.930289   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:51:59.930306   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:51:59.930312   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:51:59.930336   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:51:59.930376   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:51:59.930393   29072 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:51:59.930398   29072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:51:59.930418   29072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:51:59.930461   29072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
	I0913 18:52:00.048247   29072 provision.go:177] copyRemoteCerts
	I0913 18:52:00.048305   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:52:00.048326   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:52:00.050757   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.051071   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:52:00.051100   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.051290   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:52:00.051461   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.051606   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:52:00.051723   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:52:00.137264   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:52:00.137334   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:52:00.163407   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:52:00.163496   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:52:00.189621   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:52:00.189701   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 18:52:00.217762   29072 provision.go:87] duration metric: took 293.713666ms to configureAuth
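configureAuth regenerated the machine server certificate with the SANs logged above (127.0.0.1, 192.168.39.145, ha-617764, localhost, minikube) and copied server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A sketch for inspecting what landed there, assuming an OpenSSL new enough to support -ext (not something the test itself runs):

    $ sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem
    $ sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem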
	I0913 18:52:00.217788   29072 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:52:00.218009   29072 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:52:00.218092   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:52:00.220836   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.221218   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:52:00.221242   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:52:00.221408   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:52:00.221603   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.221754   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:52:00.221858   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:52:00.222007   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:52:00.222233   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:52:00.222253   29072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 18:53:30.964596   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 18:53:30.964623   29072 machine.go:96] duration metric: took 1m31.403995279s to provisionDockerMachine
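Nearly all of the 1m31s spent in provisionDockerMachine is the single SSH command above: writing /etc/sysconfig/crio.minikube and then systemctl restart crio. A sketch for checking the drop-in and the service afterwards (assumes the guest's crio unit reads /etc/sysconfig/crio.minikube as an environment file, as on the minikube ISO):

    $ sudo cat /etc/sysconfig/crio.minikube
    $ sudo systemctl is-active crio
    $ sudo journalctl -u crio -n 20 --no-pager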
	I0913 18:53:30.964635   29072 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 18:53:30.964646   29072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:53:30.964661   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:30.964953   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:53:30.964983   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:30.967854   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:30.968233   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:30.968256   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:30.968420   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:30.968577   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:30.968735   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:30.968862   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.054747   29072 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:53:31.058746   29072 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:53:31.058765   29072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 18:53:31.058826   29072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 18:53:31.058894   29072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 18:53:31.058903   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 18:53:31.058980   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 18:53:31.070546   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:53:31.095694   29072 start.go:296] duration metric: took 131.045494ms for postStartSetup
	I0913 18:53:31.095766   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.096067   29072 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 18:53:31.096098   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.099237   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.099625   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.099662   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.099882   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.100052   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.100306   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.100466   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 18:53:31.181755   29072 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 18:53:31.181776   29072 fix.go:56] duration metric: took 1m31.64342706s for fixHost
	I0913 18:53:31.181802   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.184579   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.184952   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.184977   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.185120   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.185300   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.185440   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.185543   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.185681   29072 main.go:141] libmachine: Using SSH client type: native
	I0913 18:53:31.185881   29072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:53:31.185895   29072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:53:31.290841   29072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726253611.257610943
	
	I0913 18:53:31.290871   29072 fix.go:216] guest clock: 1726253611.257610943
	I0913 18:53:31.290882   29072 fix.go:229] Guest: 2024-09-13 18:53:31.257610943 +0000 UTC Remote: 2024-09-13 18:53:31.181784392 +0000 UTC m=+91.764368095 (delta=75.826551ms)
	I0913 18:53:31.290914   29072 fix.go:200] guest clock delta is within tolerance: 75.826551ms
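The delta is simply the guest's date +%s.%N minus the host-side timestamp taken when fixHost returned; 75.8ms is well inside the tolerance, so the guest clock is left alone. A manual re-check sketch using the SSH key path from the log:

    $ date +%s.%N
    $ ssh -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa \
          docker@192.168.39.145 'date +%s.%N'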
	I0913 18:53:31.290921   29072 start.go:83] releasing machines lock for "ha-617764", held for 1m31.752584374s
	I0913 18:53:31.290949   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.291205   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:53:31.293801   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.294132   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.294162   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.294317   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.294836   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.294997   29072 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:53:31.295175   29072 ssh_runner.go:195] Run: cat /version.json
	I0913 18:53:31.295188   29072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:53:31.295194   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.295224   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:53:31.297699   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.297943   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298058   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.298127   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298320   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.298384   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:31.298412   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:31.298457   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.298549   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:53:31.298608   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.298661   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:53:31.298724   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.298767   29072 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:53:31.298868   29072 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:53:31.375612   29072 ssh_runner.go:195] Run: systemctl --version
	I0913 18:53:31.401066   29072 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 18:53:31.565852   29072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:53:31.571778   29072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:53:31.571847   29072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:53:31.581630   29072 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 18:53:31.581654   29072 start.go:495] detecting cgroup driver to use...
	I0913 18:53:31.581722   29072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:53:31.604675   29072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:53:31.619441   29072 docker.go:217] disabling cri-docker service (if available) ...
	I0913 18:53:31.619504   29072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 18:53:31.633774   29072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 18:53:31.648164   29072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 18:53:31.796712   29072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 18:53:31.946300   29072 docker.go:233] disabling docker service ...
	I0913 18:53:31.946385   29072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 18:53:31.964222   29072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 18:53:31.978104   29072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 18:53:32.122796   29072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 18:53:32.266790   29072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 18:53:32.280955   29072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:53:32.300799   29072 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 18:53:32.300873   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.311427   29072 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 18:53:32.311491   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.321633   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.331772   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.342393   29072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:53:32.352735   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.362749   29072 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.373605   29072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 18:53:32.383318   29072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:53:32.392115   29072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:53:32.400858   29072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:53:32.539943   29072 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 18:53:38.961852   29072 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.421871662s)
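The sed sequence above pins pause_image to registry.k8s.io/pause:3.10, sets cgroup_manager to cgroupfs with conmon_cgroup = "pod", seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0, and enables IPv4 forwarding before the 6.4s crio restart. A verification sketch against the same file:

    $ sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
          /etc/crio/crio.conf.d/02-crio.conf
    $ cat /proc/sys/net/ipv4/ip_forward    # expect 1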
	I0913 18:53:38.961887   29072 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 18:53:38.961940   29072 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 18:53:38.967154   29072 start.go:563] Will wait 60s for crictl version
	I0913 18:53:38.967216   29072 ssh_runner.go:195] Run: which crictl
	I0913 18:53:38.971149   29072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:53:39.014499   29072 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
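The version fields above come from the sudo /usr/bin/crictl version call; the same endpoint serves the JSON image listing minikube inspects a few lines below to conclude the preloaded images are present. A sketch of the equivalent manual queries:

    $ sudo crictl version
    $ sudo crictl images --output json | head -n 20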
	I0913 18:53:39.014569   29072 ssh_runner.go:195] Run: crio --version
	I0913 18:53:39.044787   29072 ssh_runner.go:195] Run: crio --version
	I0913 18:53:39.078019   29072 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 18:53:39.079451   29072 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:53:39.082069   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:39.082416   29072 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:53:39.082436   29072 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:53:39.082687   29072 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:53:39.087441   29072 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:53:39.087568   29072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:53:39.087605   29072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:53:39.136584   29072 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:53:39.136607   29072 crio.go:433] Images already preloaded, skipping extraction
	I0913 18:53:39.136667   29072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 18:53:39.172458   29072 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 18:53:39.172492   29072 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:53:39.172503   29072 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 18:53:39.172747   29072 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:53:39.172847   29072 ssh_runner.go:195] Run: crio config
	I0913 18:53:39.228507   29072 cni.go:84] Creating CNI manager for ""
	I0913 18:53:39.228533   29072 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 18:53:39.228544   29072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:53:39.228572   29072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:53:39.228739   29072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
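The rendered InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration above is what later gets written to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes). A sanity-check sketch, assuming the v1.31.1 kubeadm binary already on the guest supports the config validate subcommand:

    $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
          --config /var/tmp/minikube/kubeadm.yaml.new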
	
	I0913 18:53:39.228762   29072 kube-vip.go:115] generating kube-vip config ...
	I0913 18:53:39.228808   29072 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 18:53:39.241131   29072 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 18:53:39.241264   29072 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
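This static pod announces the APIServerHAVIP 192.168.39.254 on eth0 and load-balances port 8443 across the control-plane members; once kubelet picks up the manifest, the VIP should be visible on whichever node holds the plndr-cp-lock lease. A spot-check sketch:

    $ ip -4 addr show dev eth0 | grep 192.168.39.254
    $ sudo crictl ps --name kube-vip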
	I0913 18:53:39.241322   29072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:53:39.251995   29072 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:53:39.252053   29072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 18:53:39.262324   29072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 18:53:39.280473   29072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:53:39.298610   29072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 18:53:39.316756   29072 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 18:53:39.334687   29072 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 18:53:39.340146   29072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:53:39.486893   29072 ssh_runner.go:195] Run: sudo systemctl start kubelet
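The scp-from-memory calls above install the 10-kubeadm.conf drop-in (the ExecStart shown at kubeadm.go:946), the kubelet.service unit, the kubeadm config and the kube-vip manifest before kubelet is reloaded and started. A sketch for confirming what systemd actually loaded:

    $ systemctl cat kubelet --no-pager
    $ sudo systemctl is-active kubelet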
	I0913 18:53:39.502556   29072 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 18:53:39.502579   29072 certs.go:194] generating shared ca certs ...
	I0913 18:53:39.502597   29072 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.502766   29072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 18:53:39.502812   29072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 18:53:39.502821   29072 certs.go:256] generating profile certs ...
	I0913 18:53:39.502893   29072 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 18:53:39.502919   29072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066
	I0913 18:53:39.502941   29072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.118 192.168.39.254]
	I0913 18:53:39.671445   29072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 ...
	I0913 18:53:39.671472   29072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066: {Name:mk866d8ebfd148c5aa5dd4cf3cd73b7d93c34404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.671644   29072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066 ...
	I0913 18:53:39.671655   29072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066: {Name:mk328fdb2d1d58c24ba660ea05d28edbd4af5263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:53:39.671724   29072 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.4c9a0066 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 18:53:39.671886   29072 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.4c9a0066 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 18:53:39.672011   29072 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 18:53:39.672025   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 18:53:39.672037   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 18:53:39.672051   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:53:39.672064   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:53:39.672076   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 18:53:39.672102   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 18:53:39.672116   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 18:53:39.672128   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 18:53:39.672174   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 18:53:39.672203   29072 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 18:53:39.672212   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:53:39.672238   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:53:39.672260   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:53:39.672281   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 18:53:39.672318   29072 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 18:53:39.672341   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.672356   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 18:53:39.672370   29072 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:39.672951   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:53:39.700311   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 18:53:39.724706   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:53:39.748770   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 18:53:39.772719   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 18:53:39.796146   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:53:39.819779   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:53:39.844430   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:53:39.868108   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 18:53:39.891117   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 18:53:39.928910   29072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:53:39.956532   29072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:53:39.974013   29072 ssh_runner.go:195] Run: openssl version
	I0913 18:53:39.980139   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 18:53:39.991816   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.996429   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 18:53:39.996499   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 18:53:40.002296   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 18:53:40.011742   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 18:53:40.022292   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.027042   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.027104   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 18:53:40.032750   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 18:53:40.041962   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:53:40.052402   29072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.056867   29072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.056911   29072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:53:40.062554   29072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
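The 51391683.0, 3ec20f2e.0 and b5213941.0 names are the OpenSSL subject-hash values of the respective CA files, which is what the openssl x509 -hash calls above compute; the hash-named symlinks let the system trust store locate each CA. The scheme for one cert, as a sketch:

    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"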
	I0913 18:53:40.071636   29072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:53:40.076041   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 18:53:40.081558   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 18:53:40.086944   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 18:53:40.092374   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 18:53:40.097877   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 18:53:40.103538   29072 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
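Each -checkend 86400 call exits non-zero if the certificate expires within the next 24 hours; a failure here would presumably push minikube towards regenerating the control-plane certs instead of reusing them. The same checks, compressed into a sketch over the non-etcd paths from the log:

    $ for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
          sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 >/dev/null \
            && echo "${c}: ok" || echo "${c}: expires within 24h"
      done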
	I0913 18:53:40.109085   29072 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:53:40.109196   29072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 18:53:40.109230   29072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 18:53:40.146805   29072 cri.go:89] found id: "6cda6910c0b8a31f1305274b5b5159cd8e7f49d4f80f9a990f705bf107a548a6"
	I0913 18:53:40.146828   29072 cri.go:89] found id: "0439d3ac606c787a4b2867d3b05dc915beecb59f9e5b7bfdd3792f7d2ac6208a"
	I0913 18:53:40.146832   29072 cri.go:89] found id: "6b090ae4c1c69f7f8d5633fb50dcc2f26a44e8e5949ec8befbaabc61bb3a0bec"
	I0913 18:53:40.146835   29072 cri.go:89] found id: "3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d"
	I0913 18:53:40.146837   29072 cri.go:89] found id: "31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b"
	I0913 18:53:40.146841   29072 cri.go:89] found id: "0647676f81788ee0bbd56eb7d60f950a46f51bd631508ec8b7f81c7a92597539"
	I0913 18:53:40.146853   29072 cri.go:89] found id: "7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1"
	I0913 18:53:40.146856   29072 cri.go:89] found id: "5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218"
	I0913 18:53:40.146858   29072 cri.go:89] found id: "b116fa0d9ecbf5eef9e58f830dd785949b52bfac86d2d3b084cc734d4d60272a"
	I0913 18:53:40.146863   29072 cri.go:89] found id: "8a41f6c9e152de2576ff4360ccc68e55259081afe9bb9bcb9f172aec46f9ba14"
	I0913 18:53:40.146866   29072 cri.go:89] found id: "8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c"
	I0913 18:53:40.146868   29072 cri.go:89] found id: "1d66613ccb1f220327ae486a7304f4ca06bdfa65b31bf2cde55a2f616174be80"
	I0913 18:53:40.146873   29072 cri.go:89] found id: "3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5"
	I0913 18:53:40.146876   29072 cri.go:89] found id: ""
	I0913 18:53:40.146911   29072 ssh_runner.go:195] Run: sudo runc list -f json
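	For context on the two probes logged just above: before restarting the cluster, minikube checks that each control-plane certificate is still valid for at least the next 24 hours (openssl's -checkend 86400 probe) and then enumerates the kube-system containers known to CRI-O via crictl. A minimal sketch of reproducing those checks by hand on the node (assuming shell access to the ha-617764 VM, e.g. via minikube ssh, and the certificate paths shown in the log) is:
	
	  # exit status 0 means the certificate will not expire within the next 86400 seconds (24h)
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	  # print only the container IDs (running and exited) that CRI-O has for the kube-system namespace
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	
	The IDs returned by the crictl call correspond to the "found id:" lines above; an empty last entry simply marks the end of the list.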
	
	
	==> CRI-O <==
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.423478245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0f7c6dc-7350-446d-ab49-9166f270d535 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.424819442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c17755f-49d9-443c-8491-eab7e9a44d1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.425293867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253921425225877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c17755f-49d9-443c-8491-eab7e9a44d1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.425818359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af524e14-0b12-4612-8fd4-8edc37980d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.425891428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af524e14-0b12-4612-8fd4-8edc37980d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.426393313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af524e14-0b12-4612-8fd4-8edc37980d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.432357062Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=43b7957c-b1b0-4974-9117-8170974447f6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.433838234Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-t4fwq,Uid:1bc3749b-0225-445c-9b86-767558392df7,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253659740539351,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:45:14.261429860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-617764,Uid:5545735943f8ff5a38c9aea0b4c785ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726253640555148083,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{kubernetes.io/config.hash: 5545735943f8ff5a38c9aea0b4c785ad,kubernetes.io/config.seen: 2024-09-13T18:53:39.302561175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-htrbt,Uid:41a8301e-fca3-4907-bc77-808b013a2d2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626040810467,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-13T18:42:45.550911912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-617764,Uid:7f4db9ee38410b02d601ed80ae90b5a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626023937802,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.145:8443,kubernetes.io/config.hash: 7f4db9ee38410b02d601ed80ae90b5a4,kubernetes.io/config.seen: 2024-09-13T18:42:28.449170428Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fdhnm,Uid:5c50
9676-c7ba-4841-89b5-7e4266abd9c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253626004489664,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.562483180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&PodSandboxMetadata{Name:etcd-ha-617764,Uid:bf3d4ca74d8429dc43b760fdf8f185ab,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625988044685,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,tier: control-plane,},Annotations:map[string]s
tring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bf3d4ca74d8429dc43b760fdf8f185ab,kubernetes.io/config.seen: 2024-09-13T18:42:28.449166555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-92mml,Uid:36bd37dc-88c4-4264-9e7c-a90246cc5212,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625963443558,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.813185019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-617764,Uid:815ca8cb73177215968b5c5242b63776,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625961334887,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 815ca8cb73177215968b5c5242b63776,kubernetes.io/config.seen: 2024-09-13T18:42:28.449171632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1e5f1a84-1798-430e-af04-82469e8f4a7b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952996915,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,i
o.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T18:42:45.559796406Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-617764,Uid:15cf7928620050653d6239c1007547bd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625952430273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15cf7928620050653d6239c1007547bd,kubernetes.io/config.seen: 2024-09-13T18:42:28.449172711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&PodSandboxMetadata{Name:kindnet-b9bzd,Uid:81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726253625948438015,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.806641658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-t4fwq,Uid:1bc3749b-0225-445c-9b86-767558392df7,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726253114586821266,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:45:14.261429860Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fdhnm,Uid:5c509676-c7ba-4841-89b5-7e4266abd9c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252965889636482,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.562483180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-htrbt,Uid:41a8301e-fca3-4907-bc77-808b013a2d2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252965859469324,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:45.550911912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-92mml,Uid:36bd37dc-88c4-4264-9e7c-a90246cc5212,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252953724191437,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.813185019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&PodSandboxMetadata{Name:kindnet-b9bzd,Uid:81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252953714445038,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T18:42:32.806641658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&PodSandboxMetadata{Name:etcd-ha-617764,Uid:bf3d4ca74d8429dc43b760fdf8f185ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252942072834224,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bf3d4ca74d8429dc43b760fdf8f185ab,kubernetes.io/config.seen: 2024-09-13T18:42:21.598282672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-617764,Uid:15cf7928620050653d6239c1007547bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726252942055915287,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15cf7928
620050653d6239c1007547bd,kubernetes.io/config.seen: 2024-09-13T18:42:21.598280864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=43b7957c-b1b0-4974-9117-8170974447f6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.437430464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64386d05-1ea8-462f-a1eb-e07d1feb1166 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.437576016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64386d05-1ea8-462f-a1eb-e07d1feb1166 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.438172805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64386d05-1ea8-462f-a1eb-e07d1feb1166 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.477880094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b4324a1-c694-4240-972c-8ce6f7357faa name=/runtime.v1.RuntimeService/Version
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.477988676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b4324a1-c694-4240-972c-8ce6f7357faa name=/runtime.v1.RuntimeService/Version
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.479625218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98b4b15b-18a2-476b-8190-2ede45f93fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.480219103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253921480190014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98b4b15b-18a2-476b-8190-2ede45f93fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.480994030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55b7fe2d-cc7c-4544-9335-3cda7427564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.481091459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55b7fe2d-cc7c-4544-9335-3cda7427564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.481642094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55b7fe2d-cc7c-4544-9335-3cda7427564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.534568328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26080e22-a699-441e-9a02-8bdad8ffda52 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.534671635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26080e22-a699-441e-9a02-8bdad8ffda52 name=/runtime.v1.RuntimeService/Version
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.535824205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a02bab1c-1eb6-4ea8-8375-e2ae4f348f96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.536331619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253921536223901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a02bab1c-1eb6-4ea8-8375-e2ae4f348f96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.536745306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e84cb108-f771-4ffa-b815-c06921d47ac0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.536814959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e84cb108-f771-4ffa-b815-c06921d47ac0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 18:58:41 ha-617764 crio[3569]: time="2024-09-13 18:58:41.537297921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726253708567358184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726253664532596979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726253664535947858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726253659862465525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4f7dd69063bd415cb8f477dce014cc757b01ddc9460b2a58e69522054176f,PodSandboxId:9e2f87d06434fbb2cc22f0b55402dcb47eb004c630073d3484ea5c76f045adb0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726253658537126239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726253640657559269,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726253626790149443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626803895339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726253626698846769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726253626405055504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726253626518194944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0,PodSandboxId:ba66cfa072f5d2c9d7e729390ec3e24cdc5d4f227dd184ba1ffe78dc46889849,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726253626457035523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a,PodSandboxId:dec28a9d29645441c6f66b40ed9acc817db86c5fcda1b52aec34a8ea0a961910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726253626494877289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d
601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726253626301722665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d456d4bd90d260631341afbae7431ccb0cc790417a4c9e8160e57467bfaf2b9,PodSandboxId:99c7958cb4872950974230539dc45944b50688bb9878563aee2e61fedfb5c35c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253118277457132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b,PodSandboxId:e586cc7654290c3c0cda0d6fd83b97505e3979cb49833b1eea94d979d037f3b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966219468000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d,PodSandboxId:bd08f2ca13336963167d06a6cc6476a2e09fa40dcabe8661be5e3dacaf6be576,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726252966228593988,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1,PodSandboxId:47bf9789759213fdfb7ee71e76e2a2afa4707545aca3efe3f7ebec2b63ea5635,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726252954213974086,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218,PodSandboxId:585827783c6745769918e227877e73f139672d7f78bf2704aaebc3850f3d07f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726252953884543786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c,PodSandboxId:16bf73d50b501d98243464444491201ec86d4a669bf5b3098590c9930bee5091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726252942305036275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5,PodSandboxId:353214980e0a16b1f3f76759918d9ab56c9f4add56df1e83cc1f6e8daf96c9a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726252942238767403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e84cb108-f771-4ffa-b815-c06921d47ac0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	570c77981741f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   9e2f87d06434f       storage-provisioner
	0a368121b3974       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   dec28a9d29645       kube-apiserver-ha-617764
	32fcfa457f3ff       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   ba66cfa072f5d       kube-controller-manager-ha-617764
	2bb3333d84624       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   0238ab84a5121       busybox-7dff88458-t4fwq
	59d4f7dd69063       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   9e2f87d06434f       storage-provisioner
	46d659112c682       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   566613db4514b       kube-vip-ha-617764
	09fe052337ef3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   5f1a3394b645b       coredns-7c65d6cfc9-fdhnm
	dddc0dfb6a255       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   18e2ef1278c48       kindnet-b9bzd
	b752b1ac699cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   3a3adb124d23e       coredns-7c65d6cfc9-htrbt
	15c33340e3091       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   acfcaea56c23e       etcd-ha-617764
	ed301adb1e454       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Exited              kube-apiserver            2                   dec28a9d29645       kube-apiserver-ha-617764
	da04db3dd6709       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Exited              kube-controller-manager   1                   ba66cfa072f5d       kube-controller-manager-ha-617764
	1d1a0b2d1c95e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   09bbefd12114c       kube-proxy-92mml
	80a7cb47dee67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   a63972ff65b12       kube-scheduler-ha-617764
	0d456d4bd90d2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   99c7958cb4872       busybox-7dff88458-t4fwq
	3502979cf3ea1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   bd08f2ca13336       coredns-7c65d6cfc9-fdhnm
	31a66627d146a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   e586cc7654290       coredns-7c65d6cfc9-htrbt
	7e98c43ffb734       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   47bf978975921       kindnet-b9bzd
	5065ca7882269       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   585827783c674       kube-proxy-92mml
	8a31170a295b7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   16bf73d50b501       kube-scheduler-ha-617764
	3b2f0c73fe9ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   353214980e0a1       etcd-ha-617764
	
	
	==> coredns [09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[818669773]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 18:53:51.525) (total time: 10000ms):
	Trace[818669773]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:54:01.526)
	Trace[818669773]: [10.000979018s] [10.000979018s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [31a66627d146af0c454548e90aaf09b72db4e0fb35a2436b75ee6b7712ebfd7b] <==
	[INFO] 10.244.0.4:42212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107526s
	[INFO] 10.244.0.4:55473 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001625324s
	[INFO] 10.244.0.4:57662 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027413s
	[INFO] 10.244.0.4:42804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086384s
	[INFO] 10.244.1.2:42712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149698s
	[INFO] 10.244.1.2:33468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117843s
	[INFO] 10.244.1.2:53696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125501s
	[INFO] 10.244.1.2:59050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121214s
	[INFO] 10.244.2.2:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129604s
	[INFO] 10.244.2.2:33290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127517s
	[INFO] 10.244.0.4:48739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096314s
	[INFO] 10.244.0.4:42249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049139s
	[INFO] 10.244.1.2:35348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327466s
	[INFO] 10.244.1.2:36802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158894s
	[INFO] 10.244.2.2:33661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134839s
	[INFO] 10.244.2.2:41493 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135174s
	[INFO] 10.244.0.4:55720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006804s
	[INFO] 10.244.0.4:59841 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009592s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1896&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3502979cf3ea177059372b048c527fdda963bdd51e52d12d22dfa810cf54057d] <==
	[INFO] 10.244.1.2:53881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133465s
	[INFO] 10.244.2.2:44355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163171s
	[INFO] 10.244.2.2:36763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001800499s
	[INFO] 10.244.2.2:41469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115361s
	[INFO] 10.244.2.2:40909 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145743s
	[INFO] 10.244.2.2:44681 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149088s
	[INFO] 10.244.0.4:51555 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069764s
	[INFO] 10.244.0.4:53574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001057592s
	[INFO] 10.244.0.4:45350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035427s
	[INFO] 10.244.0.4:48145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190172s
	[INFO] 10.244.2.2:36852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187208s
	[INFO] 10.244.2.2:58201 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010302s
	[INFO] 10.244.0.4:45335 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139302s
	[INFO] 10.244.0.4:41623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054642s
	[INFO] 10.244.1.2:43471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145957s
	[INFO] 10.244.1.2:55858 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179256s
	[INFO] 10.244.2.2:35120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154146s
	[INFO] 10.244.2.2:57748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106668s
	[INFO] 10.244.0.4:35176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009163s
	[INFO] 10.244.0.4:35630 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191227s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1842&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1865&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1865&timeout=5m46s&timeoutSeconds=346&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1] <==
	Trace[858252908]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer 13702ms (18:54:11.872)
	Trace[858252908]: [13.702752412s] [13.702752412s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:58:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:57:14 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:57:14 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:57:14 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:57:14 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Warning  ContainerGCFailed        5m13s (x2 over 6m13s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m5s (x3 over 5m54s)   kubelet          Node ha-617764 status is now: NodeNotReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   NodeNotReady             106s                   node-controller  Node ha-617764 status is now: NodeNotReady
	  Normal   NodeReady                87s (x2 over 15m)      kubelet          Node ha-617764 status is now: NodeReady
	  Normal   NodeHasSufficientPID     87s (x2 over 16m)      kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    87s (x2 over 16m)      kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  87s (x2 over 16m)      kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:55:10 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    3ff149de-a1f6-4a53-9c3a-07c56d69cf30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-617764-m02 status is now: NodeNotReady
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:56:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hzxvw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-47jgz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-5rlkn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-617764-m04 has been rebooted, boot id: 44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Normal   NodeReady                2m48s                  kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m32s)   node-controller  Node ha-617764-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +10.036071] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051740] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182667] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.119649] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.275654] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.901030] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.328019] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	[Sep13 18:53] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.152592] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.176959] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +0.278033] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +6.938453] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.087335] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.505183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221465] kauditd_printk_skb: 85 callbacks suppressed
	[Sep13 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.066370] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89] <==
	{"level":"info","ts":"2024-09-13T18:55:18.326463Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"d1b5616c38681b99","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-13T18:55:18.326549Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:55:19.216734Z","caller":"traceutil/trace.go:171","msg":"trace[1422151895] transaction","detail":"{read_only:false; response_revision:2359; number_of_response:1; }","duration":"160.343154ms","start":"2024-09-13T18:55:19.056370Z","end":"2024-09-13T18:55:19.216714Z","steps":["trace[1422151895] 'process raft request'  (duration: 160.238449ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:55:19.218487Z","caller":"traceutil/trace.go:171","msg":"trace[1096473554] linearizableReadLoop","detail":"{readStateIndex:2777; appliedIndex:2778; }","duration":"150.825564ms","start":"2024-09-13T18:55:19.067646Z","end":"2024-09-13T18:55:19.218472Z","steps":["trace[1096473554] 'read index received'  (duration: 150.820286ms)","trace[1096473554] 'applied index is now lower than readState.Index'  (duration: 3.856µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T18:55:19.218640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.99442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:55:19.218733Z","caller":"traceutil/trace.go:171","msg":"trace[1131931288] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2359; }","duration":"151.099422ms","start":"2024-09-13T18:55:19.067623Z","end":"2024-09-13T18:55:19.218723Z","steps":["trace[1131931288] 'agreement among raft nodes before linearized reading'  (duration: 150.970018ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:56:08.287841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(1372937678584979093 4950477381744769801)"}
	{"level":"info","ts":"2024-09-13T18:56:08.290431Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","removed-remote-peer-id":"d1b5616c38681b99","removed-remote-peer-urls":["https://192.168.39.118:2380"]}
	{"level":"info","ts":"2024-09-13T18:56:08.290523Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.290790Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:56:08.290833Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291038Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:56:08.291067Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291104Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"44b3a0f32f80bb09","removed-member-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291150Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-09-13T18:56:08.291431Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291682Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99","error":"context canceled"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291737Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d1b5616c38681b99","error":"failed to read d1b5616c38681b99 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-13T18:56:08.291772Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.291946Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99","error":"context canceled"}
	{"level":"info","ts":"2024-09-13T18:56:08.291985Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:56:08.291999Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:56:08.292013Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"44b3a0f32f80bb09","removed-remote-peer-id":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.303967Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"44b3a0f32f80bb09","remote-peer-id-stream-handler":"44b3a0f32f80bb09","remote-peer-id-from":"d1b5616c38681b99"}
	{"level":"warn","ts":"2024-09-13T18:56:08.312881Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"44b3a0f32f80bb09","remote-peer-id-stream-handler":"44b3a0f32f80bb09","remote-peer-id-from":"d1b5616c38681b99"}
	
	
	==> etcd [3b2f0c73fe9ef4e082cc81429c4d5b062aa2764776ca148ed40e6d633b128ae5] <==
	{"level":"warn","ts":"2024-09-13T18:52:00.370494Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:51:59.772954Z","time spent":"597.509508ms","remote":"127.0.0.1:54368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/09/13 18:52:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-13T18:52:00.402546Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T18:52:00.402739Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T18:52:00.404118Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-13T18:52:00.404398Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404476Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404521Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404572Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404630Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404643Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:52:00.404649Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404661Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404768Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404813Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.404867Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d1b5616c38681b99"}
	{"level":"info","ts":"2024-09-13T18:52:00.408189Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"warn","ts":"2024-09-13T18:52:00.408214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.642224444s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-13T18:52:00.408375Z","caller":"traceutil/trace.go:171","msg":"trace[494011848] range","detail":"{range_begin:; range_end:; }","duration":"8.642401593s","start":"2024-09-13T18:51:51.765965Z","end":"2024-09-13T18:52:00.408366Z","steps":["trace[494011848] 'agreement among raft nodes before linearized reading'  (duration: 8.642223024s)"],"step_count":1}
	{"level":"error","ts":"2024-09-13T18:52:00.408425Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-13T18:52:00.408519Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-13T18:52:00.408810Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-617764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> kernel <==
	 18:58:42 up 16 min,  0 users,  load average: 0.49, 0.58, 0.36
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7e98c43ffb7347041b10b0e7a00cc20d3901c203313e4f54385c199a191115e1] <==
	I0913 18:51:35.370279       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:35.370303       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:35.370514       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:35.370554       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:35.370658       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:35.370696       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	E0913 18:51:35.968903       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1896&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0913 18:51:45.378637       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:51:45.378689       1 main.go:299] handling current node
	I0913 18:51:45.378721       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:45.378727       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:45.378867       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:45.378889       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:45.378940       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:45.378944       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	W0913 18:51:54.400664       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1896": dial tcp 10.96.0.1:443: connect: no route to host
	E0913 18:51:54.400725       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1896": dial tcp 10.96.0.1:443: connect: no route to host
	I0913 18:51:55.369335       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:51:55.369380       1 main.go:299] handling current node
	I0913 18:51:55.369395       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:51:55.369400       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:51:55.369546       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0913 18:51:55.369569       1 main.go:322] Node ha-617764-m03 has CIDR [10.244.2.0/24] 
	I0913 18:51:55.369634       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:51:55.369653       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e] <==
	I0913 18:57:57.992785       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.986622       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:07.986810       1 main.go:299] handling current node
	I0913 18:58:07.986855       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:07.986874       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.987050       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:07.987072       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988128       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:17.988336       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988500       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:17.988524       1 main.go:299] handling current node
	I0913 18:58:17.988554       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:17.988558       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988426       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:27.988495       1 main.go:299] handling current node
	I0913 18:58:27.988516       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:27.988521       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988689       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:27.988745       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:37.994223       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:37.994340       1 main.go:299] handling current node
	I0913 18:58:37.994361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:37.994371       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:37.994612       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:37.994637       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc] <==
	I0913 18:54:26.762982       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0913 18:54:26.855539       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 18:54:26.855613       1 policy_source.go:224] refreshing policies
	I0913 18:54:26.863834       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 18:54:26.863916       1 aggregator.go:171] initial CRD sync complete...
	I0913 18:54:26.863956       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 18:54:26.863979       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 18:54:26.864001       1 cache.go:39] Caches are synced for autoregister controller
	I0913 18:54:26.896130       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 18:54:26.930711       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 18:54:26.931123       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 18:54:26.932008       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 18:54:26.934071       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 18:54:26.936515       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 18:54:26.937087       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 18:54:26.937125       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 18:54:26.937311       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 18:54:26.947349       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0913 18:54:27.101971       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118]
	I0913 18:54:27.103804       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 18:54:27.111764       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0913 18:54:27.115555       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0913 18:54:27.738608       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0913 18:54:28.135579       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.118 192.168.39.145]
	W0913 18:56:28.145380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145 192.168.39.203]
	
	
	==> kube-apiserver [ed301adb1e4543af8e7b0bc6901dd6ef8c6bd3a45f58443df445f01ad6bb0a0a] <==
	I0913 18:53:47.282613       1 options.go:228] external host was not specified, using 192.168.39.145
	I0913 18:53:47.287697       1 server.go:142] Version: v1.31.1
	I0913 18:53:47.292409       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:53:48.004354       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0913 18:53:48.010657       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 18:53:48.014664       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0913 18:53:48.014696       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0913 18:53:48.014906       1 instance.go:232] Using reconciler: lease
	W0913 18:54:08.003438       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0913 18:54:08.003437       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0913 18:54:08.015771       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0913 18:54:08.015858       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
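The grpc dial warnings and the fatal "Error creating leases" above show this apiserver's storage factory timing out before it could complete a TLS handshake with etcd on 127.0.0.1:2379. A minimal way to reproduce that probe from outside the apiserver is sketched below in Go: the endpoint and timeout mirror the log, while the certificate paths follow the usual kubeadm/minikube layout and are assumptions; this is an illustration, not the apiserver's own storage code.

package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumed kubeadm/minikube certificate layout; adjust if the node differs.
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.key",
	)
	if err != nil {
		panic(err)
	}
	ca, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"}, // endpoint from the log above
		DialTimeout: 5 * time.Second,
		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	status, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		// This is the condition the apiserver hit: no healthy etcd behind the endpoint.
		fmt.Fprintln(os.Stderr, "etcd unreachable:", err)
		os.Exit(1)
	}
	fmt.Printf("etcd member %x is serving revision %d\n", status.Header.MemberId, status.Header.Revision)
}
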
	
	
	==> kube-controller-manager [32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d] <==
	I0913 18:57:08.917941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.467699ms"
	I0913 18:57:08.918051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.28µs"
	E0913 18:57:10.164283       1 gc_controller.go:151] "Failed to get node" err="node \"ha-617764-m03\" not found" logger="pod-garbage-collector-controller" node="ha-617764-m03"
	E0913 18:57:10.164329       1 gc_controller.go:151] "Failed to get node" err="node \"ha-617764-m03\" not found" logger="pod-garbage-collector-controller" node="ha-617764-m03"
	E0913 18:57:10.164335       1 gc_controller.go:151] "Failed to get node" err="node \"ha-617764-m03\" not found" logger="pod-garbage-collector-controller" node="ha-617764-m03"
	E0913 18:57:10.164340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-617764-m03\" not found" logger="pod-garbage-collector-controller" node="ha-617764-m03"
	E0913 18:57:10.164345       1 gc_controller.go:151] "Failed to get node" err="node \"ha-617764-m03\" not found" logger="pod-garbage-collector-controller" node="ha-617764-m03"
	I0913 18:57:10.185993       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-617764-m03"
	I0913 18:57:10.231191       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-617764-m03"
	I0913 18:57:10.231283       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8mbkd"
	I0913 18:57:10.260481       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8mbkd"
	I0913 18:57:10.260523       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-617764-m03"
	I0913 18:57:10.307797       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-617764-m03"
	I0913 18:57:10.307940       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-617764-m03"
	I0913 18:57:10.352223       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-617764-m03"
	I0913 18:57:10.352375       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7bpk5"
	E0913 18:57:10.355451       1 gc_controller.go:255] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a72a7-32a5-4502-b52d-eeba572f94d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-09-13T18:57:10Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-proxy-7bpk5\": pods \"kube-proxy-7bpk5\" not found" logger="UnhandledError"
	I0913 18:57:10.356720       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-617764-m03"
	E0913 18:57:10.361495       1 gc_controller.go:255] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01d83f8e-84af-4ebb-a64d-90a1a4dd7799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-09-13T18:57:10Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-scheduler-ha-617764-m03\": pods \"kube-scheduler-ha-617764-m03\" not found" logger="UnhandledError"
	I0913 18:57:10.362779       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-617764-m03"
	E0913 18:57:10.366398       1 gc_controller.go:255] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21987759-d9ea-4367-96c5-f95df97fa81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-09-13T18:57:10Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-617764-m03\": pods \"kube-vip-ha-617764-m03\" not found" logger="UnhandledError"
	I0913 18:57:10.764058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764"
	I0913 18:57:14.984541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764"
	I0913 18:57:15.000452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764"
	I0913 18:57:15.425955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764"
	
	
	==> kube-controller-manager [da04db3dd67098c6ed0ba3118018e8a2ed0dbc987d1383c585c961ef2d592ff0] <==
	I0913 18:53:47.517644       1 serving.go:386] Generated self-signed cert in-memory
	I0913 18:53:47.859631       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0913 18:53:47.859681       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:53:47.861631       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0913 18:53:47.862454       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 18:53:47.862626       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 18:53:47.862727       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0913 18:54:09.021092       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.145:8443/healthz\": dial tcp 192.168.39.145:8443: connect: connection refused"
	
	
	==> kube-proxy [1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163] <==
	E0913 18:54:28.193745       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-617764\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0913 18:54:28.194003       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0913 18:54:28.194170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:54:28.234105       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:54:28.234302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:54:28.234395       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:54:28.237390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:54:28.237818       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:54:28.237860       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:54:28.240362       1 config.go:199] "Starting service config controller"
	I0913 18:54:28.240424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:54:28.240535       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:54:28.240556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:54:28.241385       1 config.go:328] "Starting node config controller"
	I0913 18:54:28.241411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0913 18:54:31.266663       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 18:54:31.266902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.267155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.270424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.270680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 18:54:32.241327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:54:32.541475       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:54:32.642363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5065ca788226965fdaff6088f9b63d8cf7f5a5a7f59a07f825cfcdd7bc02e218] <==
	E0913 18:50:42.913076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:42.913118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:42.913227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.080660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.080899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.082079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.082429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:50.082306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:50.082585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:50:59.298341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:50:59.298561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:02.368906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:02.369558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:02.369471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:02.370108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:17.728693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:17.728769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:20.801275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:20.801339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:26.945182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:26.945336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:48.449478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:48.449614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:51:54.592787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:51:54.592909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222] <==
	W0913 18:54:18.337710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.337790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.785652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.785751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:23.154505       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:23.154624       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:26.780601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:54:26.780738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.780951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:54:26.781066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:54:26.783651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:54:26.784151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.784400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:54:26.784439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:54:44.032097       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:56:04.977977       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:56:04.978105       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a455845-10fb-415a-badb-63751bb03ec8(default/busybox-7dff88458-hzxvw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hzxvw"
	E0913 18:56:04.978138       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" pod="default/busybox-7dff88458-hzxvw"
	I0913 18:56:04.978160       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	
	
	==> kube-scheduler [8a31170a295b77a01a2c07a4bf7dbd0be4738f757372e24a85bcaf7d50d27d4c] <==
	I0913 18:45:52.688769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jvrw5" node="ha-617764-m04"
	E0913 18:45:52.689590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.689658       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fb31ed1c-fbc0-46ca-b60c-7201362519ff(kube-system/kube-proxy-5rlkn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5rlkn"
	E0913 18:45:52.689678       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5rlkn\": pod kube-proxy-5rlkn is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-5rlkn"
	I0913 18:45:52.689696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5rlkn" node="ha-617764-m04"
	E0913 18:45:52.694462       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:45:52.694585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 848151c4-6f4d-47e6-9447-bd1d09469957(kube-system/kube-proxy-xtt2d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xtt2d"
	E0913 18:45:52.694606       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xtt2d\": pod kube-proxy-xtt2d is already assigned to node \"ha-617764-m04\"" pod="kube-system/kube-proxy-xtt2d"
	I0913 18:45:52.694636       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xtt2d" node="ha-617764-m04"
	E0913 18:51:44.585541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0913 18:51:45.076949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0913 18:51:46.688890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0913 18:51:46.694206       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0913 18:51:48.372407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0913 18:51:49.073673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0913 18:51:49.842409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:51.306914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0913 18:51:51.530632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0913 18:51:51.856307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:52.826080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0913 18:51:52.886933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0913 18:51:54.578007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0913 18:51:55.399131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0913 18:51:56.976982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0913 18:52:00.329637       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 18:57:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:57:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:57:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:57:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:57:28 ha-617764 kubelet[1315]: E0913 18:57:28.795529    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253848794950660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:28 ha-617764 kubelet[1315]: E0913 18:57:28.795571    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253848794950660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:38 ha-617764 kubelet[1315]: E0913 18:57:38.797766    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253858797138773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:38 ha-617764 kubelet[1315]: E0913 18:57:38.798073    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253858797138773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:48 ha-617764 kubelet[1315]: E0913 18:57:48.800605    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253868799867858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:48 ha-617764 kubelet[1315]: E0913 18:57:48.800774    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253868799867858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:58 ha-617764 kubelet[1315]: E0913 18:57:58.803166    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253878802778303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:57:58 ha-617764 kubelet[1315]: E0913 18:57:58.803227    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253878802778303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:08 ha-617764 kubelet[1315]: E0913 18:58:08.804816    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253888804331371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:08 ha-617764 kubelet[1315]: E0913 18:58:08.804858    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253888804331371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:18 ha-617764 kubelet[1315]: E0913 18:58:18.807002    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253898806604045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:18 ha-617764 kubelet[1315]: E0913 18:58:18.807047    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253898806604045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:28 ha-617764 kubelet[1315]: E0913 18:58:28.543827    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 18:58:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:58:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:58:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:58:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:58:28 ha-617764 kubelet[1315]: E0913 18:58:28.808893    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253908808491224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:28 ha-617764 kubelet[1315]: E0913 18:58:28.808927    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253908808491224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:38 ha-617764 kubelet[1315]: E0913 18:58:38.810969    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253918810441793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 18:58:38 ha-617764 kubelet[1315]: E0913 18:58:38.811049    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726253918810441793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
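The repeating eviction-manager errors above come from the kubelet asking the container runtime for filesystem statistics: the ImageFsInfo response embedded in each error lists one image filesystem under /var/lib/containers/storage/overlay-images but an empty ContainerFilesystems slice, which the kubelet surfaces as "missing image stats". The Go sketch below issues the same ImageFsInfo query directly against the runtime socket; /var/run/crio/crio.sock is the usual CRI-O default and an assumption here, and the code is illustrative rather than kubelet code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket location; the kubelet talks to the same endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, fs := range resp.ImageFilesystems {
		fmt.Printf("image fs %s: %d bytes, %d inodes\n",
			fs.FsId.GetMountpoint(), fs.UsedBytes.GetValue(), fs.InodesUsed.GetValue())
	}
	// An empty list here matches the responses shown in the eviction-manager errors.
	fmt.Println("container filesystems reported:", len(resp.ContainerFilesystems))
}
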
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 18:58:41.099019   31367 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
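The "token too long" failure in the stderr block above is standard bufio.Scanner behaviour rather than a corrupt file: Scan stops with ErrTooLong as soon as a single line exceeds the default 64 KiB token limit, and lastStart.txt evidently contains such a line. A minimal Go sketch of reading the same file with an enlarged buffer follows; the 10 MiB cap is an arbitrary assumption, and this illustrates the standard-library behaviour, not minikube's logs.go implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-token cap from the 64 KiB default so a very long single
	// line no longer makes Scan fail with bufio.ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		panic(err) // previously: bufio.Scanner: token too long
	}
}
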
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.87s)
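The repeated kubelet errors in the log above ("Could not set up iptables canary" for table="nat", chain="KUBE-KUBELET-CANARY") are kubelet probing IPv6 NAT support by creating a canary chain; the "Table does not exist (do you need to insmod?)" message usually means the guest kernel simply has no ip6table_nat module loaded, and the probe fails harmlessly. A minimal way to confirm that on the VM, assuming SSH access via `minikube ssh -p ha-617764` (standard modprobe/ip6tables usage, not commands taken from this run):

    # check whether the IPv6 NAT table is available in the guest kernel
    lsmod | grep -E 'ip6table_nat|ip6_tables'
    # try to load it; this only works if the module was built for the running kernel
    sudo modprobe ip6table_nat
    # repeat the probe kubelet performs: create and delete a canary chain in the nat table
    sudo ip6tables -t nat -N KUBE-KUBELET-CANARY && sudo ip6tables -t nat -X KUBE-KUBELET-CANARY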

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (657.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-617764 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0913 18:59:06.600943   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:00:57.575545   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:02:20.643770   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:04:06.601598   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:05:57.575522   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:09:06.601574   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-617764 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 105 (10m55.31113264s)

                                                
                                                
-- stdout --
	* [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	* Updating the running kvm2 "ha-617764" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-617764-m02" control-plane node in "ha-617764" cluster
	* Updating the running kvm2 "ha-617764-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.145
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.145
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:58:43.150705   31446 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:58:43.150823   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150831   31446 out.go:358] Setting ErrFile to fd 2...
	I0913 18:58:43.150835   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150989   31446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:58:43.151527   31446 out.go:352] Setting JSON to false
	I0913 18:58:43.152444   31446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2466,"bootTime":1726251457,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:58:43.152531   31446 start.go:139] virtualization: kvm guest
	I0913 18:58:43.155078   31446 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:58:43.156678   31446 notify.go:220] Checking for updates...
	I0913 18:58:43.156709   31446 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:58:43.158268   31446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:58:43.159544   31446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:58:43.160767   31446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:58:43.162220   31446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:58:43.163615   31446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:58:43.165451   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:43.165853   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.165907   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.180911   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0913 18:58:43.181388   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.181949   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.181971   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.182353   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.182521   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.182750   31446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:58:43.183084   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.183122   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.197519   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0913 18:58:43.197916   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.198411   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.198429   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.198758   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.198946   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.235966   31446 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:58:43.237300   31446 start.go:297] selected driver: kvm2
	I0913 18:58:43.237333   31446 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.237501   31446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:58:43.237936   31446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.238020   31446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:58:43.253448   31446 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:58:43.254210   31446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:58:43.254249   31446 cni.go:84] Creating CNI manager for ""
	I0913 18:58:43.254286   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 18:58:43.254380   31446 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.254578   31446 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.257570   31446 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:58:43.258900   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:58:43.258938   31446 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:58:43.258945   31446 cache.go:56] Caching tarball of preloaded images
	I0913 18:58:43.259017   31446 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:58:43.259028   31446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:58:43.259156   31446 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:58:43.259345   31446 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:58:43.259392   31446 start.go:364] duration metric: took 31.174µs to acquireMachinesLock for "ha-617764"
	I0913 18:58:43.259405   31446 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:58:43.259413   31446 fix.go:54] fixHost starting: 
	I0913 18:58:43.259679   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.259711   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.274822   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I0913 18:58:43.275298   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.275852   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.275878   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.276311   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.276486   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.276663   31446 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:58:43.278189   31446 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:58:43.278219   31446 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:58:43.280067   31446 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:58:43.281138   31446 machine.go:93] provisionDockerMachine start ...
	I0913 18:58:43.281155   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.281323   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.284023   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284521   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.284555   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284669   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.284825   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.284952   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.285055   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.285196   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.285409   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.285420   31446 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:58:43.394451   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.394477   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394708   31446 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:58:43.394736   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394924   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.397704   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398088   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.398141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398322   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.398529   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398740   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398893   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.399057   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.399258   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.399275   31446 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:58:43.520106   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.520131   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.522812   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523152   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.523170   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523391   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.523571   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523748   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523885   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.524100   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.524293   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.524308   31446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:58:43.635855   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:58:43.635900   31446 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:58:43.635930   31446 buildroot.go:174] setting up certificates
	I0913 18:58:43.635943   31446 provision.go:84] configureAuth start
	I0913 18:58:43.635958   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.636270   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:58:43.638723   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639091   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.639122   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639263   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.641516   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.641896   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.641921   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.642009   31446 provision.go:143] copyHostCerts
	I0913 18:58:43.642045   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642090   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:58:43.642118   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642204   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:58:43.642317   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642345   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:58:43.642351   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642393   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:58:43.642482   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642507   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:58:43.642516   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642554   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:58:43.642629   31446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
	I0913 18:58:44.051872   31446 provision.go:177] copyRemoteCerts
	I0913 18:58:44.051926   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:58:44.051949   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.054378   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054746   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.054779   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054963   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.055136   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.055290   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.055443   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:58:44.136923   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:58:44.136991   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:58:44.167349   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:58:44.167474   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0913 18:58:44.192816   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:58:44.192890   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:58:44.219869   31446 provision.go:87] duration metric: took 583.909353ms to configureAuth
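	The configureAuth step above regenerates the server certificate with SANs [127.0.0.1 192.168.39.145 ha-617764 localhost minikube] and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A spot-check of what actually landed there could look like the following (hypothetical commands run inside the VM, assuming its OpenSSL supports the -ext flag; not part of this run):

	    # inspect the subject and SANs of the server certificate copied above
	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	    # confirm it chains to the minikube CA that was copied alongside it
	    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem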
	I0913 18:58:44.219902   31446 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:58:44.220142   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:44.220219   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.222922   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223448   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.223533   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223808   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.224007   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224174   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224308   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.224474   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:44.224676   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:44.224698   31446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:00:18.789819   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:00:18.789840   31446 machine.go:96] duration metric: took 1m35.508690532s to provisionDockerMachine
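	Almost all of that 1m35s is the single SSH command above (write /etc/sysconfig/crio.minikube, then `sudo systemctl restart crio`), which was issued at 18:58:44 and only returned at 19:00:18. If this step is slow again, the wait can be narrowed down on the guest with standard systemd tooling (a sketch, not commands from this run):

	    # what cri-o was doing during the restart, and whether it is healthy now
	    sudo systemctl status crio --no-pager
	    sudo journalctl -u crio -b --no-pager | tail -n 50
	    # which units dominate start-up time on this boot
	    systemd-analyze blame | head -n 10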
	I0913 19:00:18.789851   31446 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 19:00:18.789861   31446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:00:18.789874   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.790220   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:00:18.790251   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.793500   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.793848   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.793875   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.794048   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.794238   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.794385   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.794569   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:18.877285   31446 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:00:18.883268   31446 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:00:18.883297   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:00:18.883423   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:00:18.883612   31446 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:00:18.883631   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:00:18.883718   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:00:18.893226   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:18.920369   31446 start.go:296] duration metric: took 130.503832ms for postStartSetup
	I0913 19:00:18.920414   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.920676   31446 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 19:00:18.920707   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.923635   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924114   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.924141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924348   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.924535   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.924698   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.924850   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 19:00:19.009141   31446 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 19:00:19.009172   31446 fix.go:56] duration metric: took 1m35.749758939s for fixHost
	I0913 19:00:19.009198   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.011920   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012313   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.012336   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012505   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.012684   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012842   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012978   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.013111   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 19:00:19.013373   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 19:00:19.013392   31446 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:00:19.118884   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254019.083169511
	
	I0913 19:00:19.118912   31446 fix.go:216] guest clock: 1726254019.083169511
	I0913 19:00:19.118923   31446 fix.go:229] Guest: 2024-09-13 19:00:19.083169511 +0000 UTC Remote: 2024-09-13 19:00:19.009181164 +0000 UTC m=+95.893684428 (delta=73.988347ms)
	I0913 19:00:19.118983   31446 fix.go:200] guest clock delta is within tolerance: 73.988347ms
	I0913 19:00:19.118991   31446 start.go:83] releasing machines lock for "ha-617764", held for 1m35.85958928s
	I0913 19:00:19.119022   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.119255   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:19.121927   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122454   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.122593   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122762   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123286   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123470   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123531   31446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:00:19.123584   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.123664   31446 ssh_runner.go:195] Run: cat /version.json
	I0913 19:00:19.123680   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.126137   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126495   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126557   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126605   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126870   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.126965   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126997   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.127049   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127133   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.127204   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127289   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127344   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.127430   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127554   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.230613   31446 ssh_runner.go:195] Run: systemctl --version
	I0913 19:00:19.238299   31446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:00:19.405183   31446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:00:19.411872   31446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:00:19.411926   31446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:00:19.421058   31446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:00:19.421086   31446 start.go:495] detecting cgroup driver to use...
	I0913 19:00:19.421155   31446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:00:19.436778   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:00:19.450920   31446 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:00:19.450979   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:00:19.464921   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:00:19.478168   31446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:00:19.645366   31446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:00:19.801636   31446 docker.go:233] disabling docker service ...
	I0913 19:00:19.801712   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:00:19.818239   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:00:19.832446   31446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:00:19.978995   31446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:00:20.122997   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
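	The sequence above stops containerd and then stops, disables, and masks the cri-docker and docker units, leaving CRI-O as the only runtime the kubelet can reach. Whether that stuck can be checked with plain systemctl queries (hypothetical spot checks; masked units report "masked" and a non-zero exit, which is expected here):

	    # both docker front-ends should now be masked or disabled
	    systemctl is-enabled docker.service cri-docker.socket cri-docker.service
	    # and neither docker nor containerd should be running
	    systemctl is-active docker.service containerd.service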
	I0913 19:00:20.139838   31446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:00:20.159570   31446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:00:20.159648   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.172313   31446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:00:20.172387   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.183969   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.195156   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.206292   31446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:00:20.218569   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.229457   31446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.241787   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.252269   31446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:00:20.262210   31446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:00:20.272169   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:20.432441   31446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:00:27.397849   31446 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.965372324s)
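	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", the unprivileged-port sysctl) before restarting CRI-O. The resulting drop-in can be spot-checked on the guest like this (an assumed manual check, not output from this run):

	    # confirm the rewritten drop-in carries the values minikube just set
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # CRI-O must be back up before kubeadm and the kubelet can talk to it
	    sudo systemctl is-active crio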
	I0913 19:00:27.397881   31446 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:00:27.397939   31446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:00:27.404132   31446 start.go:563] Will wait 60s for crictl version
	I0913 19:00:27.404202   31446 ssh_runner.go:195] Run: which crictl
	I0913 19:00:27.407981   31446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:00:27.443823   31446 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:00:27.443905   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.475173   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.506743   31446 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:00:27.508011   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:27.510651   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511033   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:27.511060   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511270   31446 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:00:27.516012   31446 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:00:27.516147   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:00:27.516207   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.563165   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.563185   31446 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:00:27.563228   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.599775   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.599799   31446 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:00:27.599809   31446 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 19:00:27.599915   31446 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:00:27.600007   31446 ssh_runner.go:195] Run: crio config
	I0913 19:00:27.651311   31446 cni.go:84] Creating CNI manager for ""
	I0913 19:00:27.651333   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:00:27.651343   31446 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:00:27.651366   31446 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:00:27.651508   31446 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
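The block above is the multi-document kubeadm YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal Go sketch, not minikube's own code, of splitting those documents and reading one field back out; it assumes a local copy of the file saved as kubeadm.yaml and the sigs.k8s.io/yaml package:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		// Hypothetical local copy of the generated kubeadm config shown above.
		raw, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// The file is several YAML documents separated by lines containing only "---".
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				panic(err)
			}
			if m["kind"] == "KubeletConfiguration" {
				// Should print the staticPodPath used for the kube-vip manifest below.
				fmt.Println("staticPodPath:", m["staticPodPath"])
			}
		}
	}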
	I0913 19:00:27.651538   31446 kube-vip.go:115] generating kube-vip config ...
	I0913 19:00:27.651587   31446 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 19:00:27.664287   31446 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 19:00:27.664396   31446 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0913 19:00:27.664455   31446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:00:27.674466   31446 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:00:27.674547   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 19:00:27.684733   31446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 19:00:27.702120   31446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:00:27.719612   31446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 19:00:27.737029   31446 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 19:00:27.755478   31446 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 19:00:27.759223   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:27.910765   31446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:00:27.925634   31446 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 19:00:27.925655   31446 certs.go:194] generating shared ca certs ...
	I0913 19:00:27.925670   31446 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:27.925837   31446 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:00:27.925877   31446 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:00:27.925887   31446 certs.go:256] generating profile certs ...
	I0913 19:00:27.925954   31446 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 19:00:27.925980   31446 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01
	I0913 19:00:27.926001   31446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 19:00:28.083419   31446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 ...
	I0913 19:00:28.083444   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01: {Name:mk5610f7b2a13e2e9a2db0fd30b419eeb2bcec9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083629   31446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 ...
	I0913 19:00:28.083645   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01: {Name:mk0e8fc15f8ef270cc2f47ac846f3a3e4156c718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083740   31446 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 19:00:28.083880   31446 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 19:00:28.084003   31446 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 19:00:28.084017   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:00:28.084030   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:00:28.084042   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:00:28.084057   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:00:28.084069   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:00:28.084082   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:00:28.084100   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:00:28.084113   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:00:28.084157   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:00:28.084185   31446 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:00:28.084195   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:00:28.084215   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:00:28.084238   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:00:28.084258   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:00:28.084294   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:28.084323   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:00:28.084336   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.084348   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.084922   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:00:28.111077   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:00:28.134495   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:00:28.159747   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:00:28.182325   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:00:28.205586   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:00:28.229539   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:00:28.252370   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:00:28.275737   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:00:28.300247   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:00:28.324266   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:00:28.347577   31446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:00:28.365115   31446 ssh_runner.go:195] Run: openssl version
	I0913 19:00:28.408066   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:00:28.469517   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486389   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486486   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.525327   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:00:28.652306   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:00:28.760544   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769712   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769775   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.819345   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:00:28.906062   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:00:29.048802   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.102932   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.103020   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.115422   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
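The openssl/ln sequence above installs the CA certificates into the system trust directory: openssl x509 -hash -noout prints the subject hash that OpenSSL's certificate-directory lookup expects (e.g. b5213941), and the <hash>.0 symlink in /etc/ssl/certs points that hash at the PEM file. A rough Go sketch of the same idea, reusing the commands and paths from the log (needs root; an illustration, not minikube's certs.go):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert mirrors the log above: compute the OpenSSL subject hash of a
	// PEM certificate and symlink <hash>.0 in /etc/ssl/certs to the PEM file.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // same effect as ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}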
	I0913 19:00:29.318793   31446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:00:29.362153   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:00:29.471278   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:00:29.492455   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:00:29.513786   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:00:29.728338   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:00:29.780205   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
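The openssl x509 -checkend 86400 runs above ask whether each certificate expires within the next 24 hours (86400 seconds). The same check can be done with Go's standard library; a small sketch against one of the paths from the log, as an illustration rather than minikube's own certificate code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certificates checked in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}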
	I0913 19:00:29.853145   31446 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:00:29.853301   31446 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:00:29.853366   31446 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:00:30.060193   31446 cri.go:89] found id: "7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2"
	I0913 19:00:30.060217   31446 cri.go:89] found id: "360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e"
	I0913 19:00:30.060223   31446 cri.go:89] found id: "26de4c71cc1f8d3a39e52e622c86361c67e1839a5b84f098c669196c7c161196"
	I0913 19:00:30.060228   31446 cri.go:89] found id: "12d8e3661fa4705e4486cfa4b69b3f31e0b159af038044b195db15b9345f4f4c"
	I0913 19:00:30.060233   31446 cri.go:89] found id: "c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd"
	I0913 19:00:30.060237   31446 cri.go:89] found id: "bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f"
	I0913 19:00:30.060240   31446 cri.go:89] found id: "570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17"
	I0913 19:00:30.060244   31446 cri.go:89] found id: "0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc"
	I0913 19:00:30.060247   31446 cri.go:89] found id: "32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d"
	I0913 19:00:30.060254   31446 cri.go:89] found id: "46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69"
	I0913 19:00:30.060259   31446 cri.go:89] found id: "09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87"
	I0913 19:00:30.060262   31446 cri.go:89] found id: "dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e"
	I0913 19:00:30.060266   31446 cri.go:89] found id: "b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1"
	I0913 19:00:30.060270   31446 cri.go:89] found id: "15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89"
	I0913 19:00:30.060277   31446 cri.go:89] found id: "1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163"
	I0913 19:00:30.060281   31446 cri.go:89] found id: "80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222"
	I0913 19:00:30.060286   31446 cri.go:89] found id: ""
	I0913 19:00:30.060335   31446 ssh_runner.go:195] Run: sudo runc list -f json
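The cri.go lines above come from shelling out to crictl with a pod-namespace label filter; each "found id" is a container ID returned for the kube-system namespace. A minimal Go sketch of that call (an illustration only; minikube actually runs the command on the VM over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command as in the log: list all kube-system containers, IDs only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}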

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-617764 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : exit status 105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.635773315s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764 -v=7                                                         | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-617764 -v=7                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:51 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	| node    | ha-617764 node delete m03 -v=7                                                 | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-617764 stop -v=7                                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true                                                       | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:58 UTC |                     |
	|         | -v=7 --alsologtostderr                                                         |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                       |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:58:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:58:43.150705   31446 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:58:43.150823   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150831   31446 out.go:358] Setting ErrFile to fd 2...
	I0913 18:58:43.150835   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150989   31446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:58:43.151527   31446 out.go:352] Setting JSON to false
	I0913 18:58:43.152444   31446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2466,"bootTime":1726251457,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:58:43.152531   31446 start.go:139] virtualization: kvm guest
	I0913 18:58:43.155078   31446 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:58:43.156678   31446 notify.go:220] Checking for updates...
	I0913 18:58:43.156709   31446 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:58:43.158268   31446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:58:43.159544   31446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:58:43.160767   31446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:58:43.162220   31446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:58:43.163615   31446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:58:43.165451   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:43.165853   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.165907   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.180911   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0913 18:58:43.181388   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.181949   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.181971   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.182353   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.182521   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.182750   31446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:58:43.183084   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.183122   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.197519   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0913 18:58:43.197916   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.198411   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.198429   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.198758   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.198946   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.235966   31446 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:58:43.237300   31446 start.go:297] selected driver: kvm2
	I0913 18:58:43.237333   31446 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.237501   31446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:58:43.237936   31446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.238020   31446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:58:43.253448   31446 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:58:43.254210   31446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:58:43.254249   31446 cni.go:84] Creating CNI manager for ""
	I0913 18:58:43.254286   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 18:58:43.254380   31446 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.254578   31446 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.257570   31446 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:58:43.258900   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:58:43.258938   31446 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:58:43.258945   31446 cache.go:56] Caching tarball of preloaded images
	I0913 18:58:43.259017   31446 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:58:43.259028   31446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:58:43.259156   31446 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:58:43.259345   31446 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:58:43.259392   31446 start.go:364] duration metric: took 31.174µs to acquireMachinesLock for "ha-617764"
	I0913 18:58:43.259405   31446 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:58:43.259413   31446 fix.go:54] fixHost starting: 
	I0913 18:58:43.259679   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.259711   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.274822   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I0913 18:58:43.275298   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.275852   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.275878   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.276311   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.276486   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.276663   31446 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:58:43.278189   31446 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:58:43.278219   31446 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:58:43.280067   31446 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:58:43.281138   31446 machine.go:93] provisionDockerMachine start ...
	I0913 18:58:43.281155   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.281323   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.284023   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284521   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.284555   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284669   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.284825   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.284952   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.285055   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.285196   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.285409   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.285420   31446 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:58:43.394451   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.394477   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394708   31446 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:58:43.394736   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394924   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.397704   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398088   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.398141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398322   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.398529   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398740   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398893   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.399057   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.399258   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.399275   31446 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:58:43.520106   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.520131   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.522812   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523152   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.523170   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523391   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.523571   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523748   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523885   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.524100   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.524293   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.524308   31446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:58:43.635855   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:58:43.635900   31446 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:58:43.635930   31446 buildroot.go:174] setting up certificates
	I0913 18:58:43.635943   31446 provision.go:84] configureAuth start
	I0913 18:58:43.635958   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.636270   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:58:43.638723   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639091   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.639122   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639263   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.641516   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.641896   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.641921   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.642009   31446 provision.go:143] copyHostCerts
	I0913 18:58:43.642045   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642090   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:58:43.642118   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642204   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:58:43.642317   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642345   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:58:43.642351   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642393   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:58:43.642482   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642507   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:58:43.642516   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642554   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:58:43.642629   31446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
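provision.go above issues the machine's server certificate with san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]. A self-contained Go sketch of how such a SAN list maps onto crypto/x509 (self-signed here for brevity, whereas minikube signs the server cert with ca.pem/ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-617764"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: hostnames and IPs the server cert must be valid for.
			DNSNames:    []string{"ha-617764", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
		}
		// Self-signed for brevity; a real provisioner passes the CA cert and key as parent/signer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			panic(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}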
	I0913 18:58:44.051872   31446 provision.go:177] copyRemoteCerts
	I0913 18:58:44.051926   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:58:44.051949   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.054378   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054746   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.054779   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054963   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.055136   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.055290   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.055443   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:58:44.136923   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:58:44.136991   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:58:44.167349   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:58:44.167474   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0913 18:58:44.192816   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:58:44.192890   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:58:44.219869   31446 provision.go:87] duration metric: took 583.909353ms to configureAuth
	I0913 18:58:44.219902   31446 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:58:44.220142   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:44.220219   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.222922   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223448   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.223533   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223808   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.224007   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224174   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224308   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.224474   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:44.224676   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:44.224698   31446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:00:18.789819   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:00:18.789840   31446 machine.go:96] duration metric: took 1m35.508690532s to provisionDockerMachine
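The block above runs a provisioning command on the guest over SSH and captures its output. Below is a minimal sketch of the same pattern, assuming the external module golang.org/x/crypto/ssh; the address, user and key path are copied from the log, the command run here (reading back /etc/sysconfig/crio.minikube) is chosen purely for illustration, and this is not minikube's ssh_runner.

// sshsketch.go: run one remote command over SSH, roughly what the
// ssh_runner lines above do. Requires golang.org/x/crypto/ssh; the key
// path, user and address are copied from this log and will differ elsewhere.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Read back the file the provisioning command above wrote on the guest.
	out, err := sess.CombinedOutput("cat /etc/sysconfig/crio.minikube")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}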
	I0913 19:00:18.789851   31446 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 19:00:18.789861   31446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:00:18.789874   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.790220   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:00:18.790251   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.793500   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.793848   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.793875   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.794048   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.794238   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.794385   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.794569   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:18.877285   31446 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:00:18.883268   31446 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:00:18.883297   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:00:18.883423   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:00:18.883612   31446 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:00:18.883631   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:00:18.883718   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:00:18.893226   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:18.920369   31446 start.go:296] duration metric: took 130.503832ms for postStartSetup
	I0913 19:00:18.920414   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.920676   31446 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 19:00:18.920707   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.923635   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924114   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.924141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924348   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.924535   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.924698   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.924850   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 19:00:19.009141   31446 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 19:00:19.009172   31446 fix.go:56] duration metric: took 1m35.749758939s for fixHost
	I0913 19:00:19.009198   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.011920   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012313   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.012336   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012505   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.012684   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012842   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012978   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.013111   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 19:00:19.013373   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 19:00:19.013392   31446 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:00:19.118884   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254019.083169511
	
	I0913 19:00:19.118912   31446 fix.go:216] guest clock: 1726254019.083169511
	I0913 19:00:19.118923   31446 fix.go:229] Guest: 2024-09-13 19:00:19.083169511 +0000 UTC Remote: 2024-09-13 19:00:19.009181164 +0000 UTC m=+95.893684428 (delta=73.988347ms)
	I0913 19:00:19.118983   31446 fix.go:200] guest clock delta is within tolerance: 73.988347ms
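fix.go above parses the guest's `date +%s.%N` output and accepts the skew because it is within tolerance. A small sketch of that comparison follows; the timestamp literal is the one captured in the log, and the 2-second tolerance is an assumed value, not necessarily minikube's threshold.

// clocksketch.go: parse a `date +%s.%N` style timestamp and compare it
// against the local clock, as the guest-clock check above does.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fractional part to 9 digits so it reads as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1726254019.083169511") // value captured in the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}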
	I0913 19:00:19.118991   31446 start.go:83] releasing machines lock for "ha-617764", held for 1m35.85958928s
	I0913 19:00:19.119022   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.119255   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:19.121927   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122454   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.122593   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122762   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123286   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123470   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123531   31446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:00:19.123584   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.123664   31446 ssh_runner.go:195] Run: cat /version.json
	I0913 19:00:19.123680   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.126137   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126495   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126557   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126605   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126870   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.126965   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126997   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.127049   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127133   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.127204   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127289   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127344   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.127430   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127554   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.230613   31446 ssh_runner.go:195] Run: systemctl --version
	I0913 19:00:19.238299   31446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:00:19.405183   31446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:00:19.411872   31446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:00:19.411926   31446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:00:19.421058   31446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:00:19.421086   31446 start.go:495] detecting cgroup driver to use...
	I0913 19:00:19.421155   31446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:00:19.436778   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:00:19.450920   31446 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:00:19.450979   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:00:19.464921   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:00:19.478168   31446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:00:19.645366   31446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:00:19.801636   31446 docker.go:233] disabling docker service ...
	I0913 19:00:19.801712   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:00:19.818239   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:00:19.832446   31446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:00:19.978995   31446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:00:20.122997   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:00:20.139838   31446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:00:20.159570   31446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:00:20.159648   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.172313   31446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:00:20.172387   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.183969   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.195156   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.206292   31446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:00:20.218569   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.229457   31446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.241787   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
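The sed commands above pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, and reset conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf. The sketch below applies those three substitutions to an in-memory sample; the sample drop-in is invented for illustration, and the remaining edits from the log (the default_sysctls block and the unprivileged-port sysctl) are not reproduced.

// criosketch.go: apply the pause_image / cgroup_manager / conmon_cgroup
// rewrites shown in the sed commands above to an in-memory config string.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// 1. Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// 2. Force the cgroupfs cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// 3. Drop any existing conmon_cgroup line, then re-add it as "pod"
	//    directly after the cgroup_manager line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}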
	I0913 19:00:20.252269   31446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:00:20.262210   31446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:00:20.272169   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:20.432441   31446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:00:27.397849   31446 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.965372324s)
	I0913 19:00:27.397881   31446 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:00:27.397939   31446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:00:27.404132   31446 start.go:563] Will wait 60s for crictl version
	I0913 19:00:27.404202   31446 ssh_runner.go:195] Run: which crictl
	I0913 19:00:27.407981   31446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:00:27.443823   31446 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:00:27.443905   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.475173   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.506743   31446 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:00:27.508011   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:27.510651   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511033   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:27.511060   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511270   31446 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:00:27.516012   31446 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:00:27.516147   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:00:27.516207   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.563165   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.563185   31446 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:00:27.563228   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.599775   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.599799   31446 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:00:27.599809   31446 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 19:00:27.599915   31446 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:00:27.600007   31446 ssh_runner.go:195] Run: crio config
	I0913 19:00:27.651311   31446 cni.go:84] Creating CNI manager for ""
	I0913 19:00:27.651333   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:00:27.651343   31446 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:00:27.651366   31446 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:00:27.651508   31446 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
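The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). A stdlib-only sketch that splits such a stream and reports each document's kind; the embedded sample is an abbreviated copy of the config shown above.

// kubeadmkinds.go: split a multi-document YAML stream like the kubeadm
// config above and print each document's kind. Standard library only.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

const sample = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.145
  bindPort: 8443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
`

var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

func main() {
	for i, doc := range strings.Split(sample, "\n---\n") {
		m := kindRe.FindStringSubmatch(doc)
		if m == nil {
			continue
		}
		fmt.Printf("document %d: %s\n", i+1, m[1])
	}
}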
	
	I0913 19:00:27.651538   31446 kube-vip.go:115] generating kube-vip config ...
	I0913 19:00:27.651587   31446 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 19:00:27.664287   31446 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 19:00:27.664396   31446 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
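The kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on port 8443. Below is a minimal reachability probe against that VIP; the address and port come from the log, the 5-second timeout is an assumption, and the program is a standalone illustration rather than anything minikube or kube-vip runs.

// vipprobe.go: check whether the control-plane VIP published by kube-vip
// accepts TCP connections (192.168.39.254:8443 in the log above).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("VIP %s not reachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("VIP %s accepts TCP connections\n", addr)
}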
	I0913 19:00:27.664455   31446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:00:27.674466   31446 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:00:27.674547   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 19:00:27.684733   31446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 19:00:27.702120   31446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:00:27.719612   31446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 19:00:27.737029   31446 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 19:00:27.755478   31446 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 19:00:27.759223   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:27.910765   31446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:00:27.925634   31446 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 19:00:27.925655   31446 certs.go:194] generating shared ca certs ...
	I0913 19:00:27.925670   31446 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:27.925837   31446 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:00:27.925877   31446 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:00:27.925887   31446 certs.go:256] generating profile certs ...
	I0913 19:00:27.925954   31446 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 19:00:27.925980   31446 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01
	I0913 19:00:27.926001   31446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 19:00:28.083419   31446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 ...
	I0913 19:00:28.083444   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01: {Name:mk5610f7b2a13e2e9a2db0fd30b419eeb2bcec9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083629   31446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 ...
	I0913 19:00:28.083645   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01: {Name:mk0e8fc15f8ef270cc2f47ac846f3a3e4156c718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083740   31446 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 19:00:28.083880   31446 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 19:00:28.084003   31446 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 19:00:28.084017   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:00:28.084030   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:00:28.084042   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:00:28.084057   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:00:28.084069   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:00:28.084082   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:00:28.084100   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:00:28.084113   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:00:28.084157   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:00:28.084185   31446 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:00:28.084195   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:00:28.084215   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:00:28.084238   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:00:28.084258   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:00:28.084294   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:28.084323   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:00:28.084336   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.084348   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.084922   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:00:28.111077   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:00:28.134495   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:00:28.159747   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:00:28.182325   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:00:28.205586   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:00:28.229539   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:00:28.252370   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:00:28.275737   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:00:28.300247   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:00:28.324266   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:00:28.347577   31446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:00:28.365115   31446 ssh_runner.go:195] Run: openssl version
	I0913 19:00:28.408066   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:00:28.469517   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486389   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486486   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.525327   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:00:28.652306   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:00:28.760544   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769712   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769775   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.819345   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:00:28.906062   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:00:29.048802   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.102932   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.103020   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.115422   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
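The commands above install the test CAs under /usr/share/ca-certificates and create the hash-named symlinks in /etc/ssl/certs that OpenSSL uses to find them. As a rough Go counterpart, the sketch below loads the installed minikubeCA PEM into a certificate pool and verifies the apiserver certificate copied earlier in the log against it; both paths are taken from the log, and the sketch deliberately does not reproduce OpenSSL's subject-hash computation.

// verifysketch.go: load the installed minikubeCA certificate into a pool
// and verify the apiserver certificate against it. Paths are copied from
// the log and must be adjusted on another host; illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func parseFirstCert(pemBytes []byte) (*x509.Certificate, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return nil, fmt.Errorf("no PEM block found")
	}
	return x509.ParseCertificate(block.Bytes)
}

func main() {
	caPEM, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no CA certificates parsed")
	}

	leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := parseFirstCert(leafPEM)
	if err != nil {
		log.Fatal(err)
	}

	if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
		log.Fatalf("verification failed: %v", err)
	}
	fmt.Println("apiserver certificate chains to minikubeCA")
}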
	I0913 19:00:29.318793   31446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:00:29.362153   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:00:29.471278   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:00:29.492455   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:00:29.513786   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:00:29.728338   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:00:29.780205   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
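Each `openssl x509 -noout -checkend 86400` above asks whether a certificate expires within the next 24 hours. The same check in Go, as a sketch; the certificate path is one of those from the log and would need adjusting on another host.

// checkend.go: Go equivalent of `openssl x509 -noout -checkend 86400`:
// report whether a certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires within 24h (NotAfter %s)\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate is valid past the next 24h (NotAfter %s)\n", cert.NotAfter)
	}
}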
	I0913 19:00:29.853145   31446 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:00:29.853301   31446 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:00:29.853366   31446 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:00:30.060193   31446 cri.go:89] found id: "7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2"
	I0913 19:00:30.060217   31446 cri.go:89] found id: "360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e"
	I0913 19:00:30.060223   31446 cri.go:89] found id: "26de4c71cc1f8d3a39e52e622c86361c67e1839a5b84f098c669196c7c161196"
	I0913 19:00:30.060228   31446 cri.go:89] found id: "12d8e3661fa4705e4486cfa4b69b3f31e0b159af038044b195db15b9345f4f4c"
	I0913 19:00:30.060233   31446 cri.go:89] found id: "c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd"
	I0913 19:00:30.060237   31446 cri.go:89] found id: "bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f"
	I0913 19:00:30.060240   31446 cri.go:89] found id: "570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17"
	I0913 19:00:30.060244   31446 cri.go:89] found id: "0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc"
	I0913 19:00:30.060247   31446 cri.go:89] found id: "32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d"
	I0913 19:00:30.060254   31446 cri.go:89] found id: "46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69"
	I0913 19:00:30.060259   31446 cri.go:89] found id: "09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87"
	I0913 19:00:30.060262   31446 cri.go:89] found id: "dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e"
	I0913 19:00:30.060266   31446 cri.go:89] found id: "b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1"
	I0913 19:00:30.060270   31446 cri.go:89] found id: "15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89"
	I0913 19:00:30.060277   31446 cri.go:89] found id: "1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163"
	I0913 19:00:30.060281   31446 cri.go:89] found id: "80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222"
	I0913 19:00:30.060286   31446 cri.go:89] found id: ""
	I0913 19:00:30.060335   31446 ssh_runner.go:195] Run: sudo runc list -f json
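cri.go above collects kube-system container IDs by running crictl with a namespace label filter. The sketch below runs the same quiet listing through os/exec and splits the output into IDs; it assumes crictl is on PATH and that the process can reach the CRI socket (the log runs it via sudo), and it is not minikube's cri package.

// crilist.go: run the quiet crictl listing shown above and collect the
// container IDs it prints, one per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}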
	
	
	==> CRI-O <==
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.023784376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22dd704e-1c0b-4cbb-8d75-d09cd3af3b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.024222288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22dd704e-1c0b-4cbb-8d75-d09cd3af3b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.092203651Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=0316f2c2-68c2-4804-b106-ab5a7b2e6c4d name=/runtime.v1.RuntimeService/Status
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.092471199Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0316f2c2-68c2-4804-b106-ab5a7b2e6c4d name=/runtime.v1.RuntimeService/Status
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.128097330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f9525cb-c66a-454f-b58e-a38bae7b210a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.128223180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f9525cb-c66a-454f-b58e-a38bae7b210a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.129171175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2875de5d-a3d0-4335-8ef9-1c3375e6baba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.129632288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254579129610594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2875de5d-a3d0-4335-8ef9-1c3375e6baba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.130064005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=030cdc55-e111-4ac0-9c35-07ec26f5792b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.130136712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=030cdc55-e111-4ac0-9c35-07ec26f5792b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.130586129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=030cdc55-e111-4ac0-9c35-07ec26f5792b name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.177564447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96cd43c1-ed2c-4529-ae30-5fac5e0699db name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.177752967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96cd43c1-ed2c-4529-ae30-5fac5e0699db name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.179005503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f87292f-2972-47dd-81f4-c5493d6f99e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.179625176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254579179597124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f87292f-2972-47dd-81f4-c5493d6f99e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.180736516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=543494b9-ec5c-485b-aacc-a5c54a0bad95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.180793953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=543494b9-ec5c-485b-aacc-a5c54a0bad95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:39 ha-617764 crio[6149]: time="2024-09-13 19:09:39.181293653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=543494b9-ec5c-485b-aacc-a5c54a0bad95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	999f5e6003ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner       7                   2ec7df8952268       storage-provisioner
	d9e9ac5d6b79f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   3 minutes ago        Running             kube-controller-manager   6                   b36021c0b35cd       kube-controller-manager-ha-617764
	87156e375ce6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   5 minutes ago        Running             kube-apiserver            6                   639b42fbde0c6       kube-apiserver-ha-617764
	e916b90f9253d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago        Exited              storage-provisioner       6                   2ec7df8952268       storage-provisioner
	8a3f92c39f616       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 minutes ago        Exited              kube-controller-manager   5                   b36021c0b35cd       kube-controller-manager-ha-617764
	50283a2285386       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 minutes ago        Exited              kube-apiserver            5                   639b42fbde0c6       kube-apiserver-ha-617764
	bf7f61f474e78       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   8 minutes ago        Running             busybox                   2                   ae1363f122834       busybox-7dff88458-t4fwq
	70f0f4e37a417       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago        Running             coredns                   2                   43cddd96b7158       coredns-7c65d6cfc9-fdhnm
	7cb162ca4a916       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   9 minutes ago        Running             kindnet-cni               2                   1aee20bf902b8       kindnet-b9bzd
	2ca0aab49c546       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago        Running             coredns                   2                   743d4b43092c6       coredns-7c65d6cfc9-htrbt
	0bdc8b32559cc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago        Running             kube-proxy                2                   90fa239fc72bb       kube-proxy-92mml
	360965c899e52       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago        Running             kube-scheduler            2                   477f3d5572a61       kube-scheduler-ha-617764
	c22324f5733e4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago        Running             etcd                      2                   e94c56bdaeede       etcd-ha-617764
	bc744a6ac873d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   9 minutes ago        Running             kube-vip                  1                   c019543061937       kube-vip-ha-617764
	2bb3333d84624       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   15 minutes ago       Exited              busybox                   1                   0238ab84a5121       busybox-7dff88458-t4fwq
	46d659112c682       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   15 minutes ago       Exited              kube-vip                  0                   566613db4514b       kube-vip-ha-617764
	09fe052337ef3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago       Exited              coredns                   1                   5f1a3394b645b       coredns-7c65d6cfc9-fdhnm
	dddc0dfb6a255       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   15 minutes ago       Exited              kindnet-cni               1                   18e2ef1278c48       kindnet-b9bzd
	b752b1ac699cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago       Exited              coredns                   1                   3a3adb124d23e       coredns-7c65d6cfc9-htrbt
	15c33340e3091       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago       Exited              etcd                      1                   acfcaea56c23e       etcd-ha-617764
	1d1a0b2d1c95e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago       Exited              kube-proxy                1                   09bbefd12114c       kube-proxy-92mml
	80a7cb47dee67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago       Exited              kube-scheduler            1                   a63972ff65b12       kube-scheduler-ha-617764
	
	
	==> coredns [09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87] <==
	Trace[818669773]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:54:01.526)
	Trace[818669773]: [10.000979018s] [10.000979018s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465] <==
	Trace[935271282]: [14.299786922s] [14.299786922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
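	
	The repeated "no route to host" and "connection refused" errors above are CoreDNS losing its list/watch connections to the in-cluster API service (10.96.0.1:443) while the control plane was down, until the pod itself received SIGTERM. A minimal sketch for pulling the same logs by hand, assuming the standard k8s-app=kube-dns label and a kubectl context named after the profile (as elsewhere in this report):
	
	  kubectl --context ha-617764 -n kube-system logs -l k8s-app=kube-dns --tail=100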
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:09:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 15m                   kube-proxy       
	  Normal   Starting                 27m                   kube-proxy       
	  Normal   Starting                 27m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           27m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           26m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           24m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           15m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           15m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           14m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   NodeNotReady             12m                   node-controller  Node ha-617764 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m (x2 over 26m)     kubelet          Node ha-617764 status is now: NodeReady
	  Warning  ContainerGCFailed        10m (x3 over 17m)     kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m22s (x10 over 16m)  kubelet          Node ha-617764 status is now: NodeNotReady
	  Normal   RegisteredNode           3m23s                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:09:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    3ff149de-a1f6-4a53-9c3a-07c56d69cf30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m20s              kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 26m                kube-proxy       
	  Normal   NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             22m                node-controller  Node ha-617764-m02 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             8m37s              kubelet          Node ha-617764-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        8m37s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m23s              node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:56:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hzxvw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-47jgz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-5rlkn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeReady                23m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 13m                kubelet          Node ha-617764-m04 has been rebooted, boot id: 44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Normal   NodeReady                13m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   NodeNotReady             12m (x2 over 14m)  node-controller  Node ha-617764-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m23s              node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
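	
	The three node descriptions above are the equivalent of kubectl describe nodes against the HA profile: ha-617764 and ha-617764-m02 report Ready, while ha-617764-m04 still carries the unreachable taints because its kubelet stopped posting status around 18:56. A sketch of reproducing the same view manually (the context name is assumed to match the profile name):
	
	  kubectl --context ha-617764 get nodes -o wide
	  kubectl --context ha-617764 describe node ha-617764-m04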
	
	
	==> dmesg <==
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	[Sep13 18:53] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.152592] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.176959] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +0.278033] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +6.938453] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.087335] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.505183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221465] kauditd_printk_skb: 85 callbacks suppressed
	[Sep13 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.066370] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 19:00] systemd-fstab-generator[6064]: Ignoring "noauto" option for root device
	[  +0.171401] systemd-fstab-generator[6082]: Ignoring "noauto" option for root device
	[  +0.186624] systemd-fstab-generator[6096]: Ignoring "noauto" option for root device
	[  +0.141420] systemd-fstab-generator[6108]: Ignoring "noauto" option for root device
	[  +0.313065] systemd-fstab-generator[6136]: Ignoring "noauto" option for root device
	[  +7.472494] systemd-fstab-generator[6247]: Ignoring "noauto" option for root device
	[  +0.086449] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.730244] kauditd_printk_skb: 117 callbacks suppressed
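	
	Nothing kernel-level in the dmesg excerpt explains the outage; the systemd-fstab-generator and kauditd lines around 18:53 and 19:00 coincide with the two runtime/kubelet restarts. To inspect the same ring buffer directly, one option (an assumption about the usual minikube workflow, not a command this report ran) is:
	
	  minikube -p ha-617764 ssh -- sudo dmesg | tail -n 50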
	
	
	==> etcd [15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89] <==
	{"level":"info","ts":"2024-09-13T18:58:44.411752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term 3] starts to transfer leadership to 130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.411785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 sends MsgTimeoutNow to 130da78b66ce9e95 immediately as 130da78b66ce9e95 already has up-to-date log"}
	{"level":"info","ts":"2024-09-13T18:58:44.414478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term: 3] received a MsgVote message with higher term from 130da78b66ce9e95 [term: 4]"}
	{"level":"info","ts":"2024-09-13T18:58:44.414534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became follower at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 3, index: 3644, vote: 0] cast MsgVote for 130da78b66ce9e95 [logterm: 3, index: 3644] at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 lost leader 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.416226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 130da78b66ce9e95 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.512693Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"44b3a0f32f80bb09","old-leader-member-id":"44b3a0f32f80bb09","new-leader-member-id":"130da78b66ce9e95","took":"101.001068ms"}
	{"level":"info","ts":"2024-09-13T18:58:44.512832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.513914Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.514037Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515584Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515625Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515668Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515788Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515815Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"130da78b66ce9e95","error":"failed to read 130da78b66ce9e95 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-13T18:58:44.515846Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515937Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"info","ts":"2024-09-13T18:58:44.515950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515960Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.522046Z","caller":"rafthttp/http.go:413","msg":"failed to find remote peer in cluster","local-member-id":"44b3a0f32f80bb09","remote-peer-id-stream-handler":"44b3a0f32f80bb09","remote-peer-id-from":"130da78b66ce9e95","cluster-id":"33ee9922f2bf4379"}
	{"level":"info","ts":"2024-09-13T18:58:44.522270Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"warn","ts":"2024-09-13T18:58:44.523349Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.203:60554","server-name":"","error":"set tcp 192.168.39.145:2380: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T18:58:45.058204Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-13T18:58:45.058341Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-617764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd] <==
	{"level":"warn","ts":"2024-09-13T19:03:35.527602Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"130da78b66ce9e95","rtt":"0s","error":"dial tcp 192.168.39.203:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T19:03:35.930750Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13477463805937998108,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-13T19:03:35.937002Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.937054Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.952129Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.982627Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"130da78b66ce9e95","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-13T19:03:35.982754Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.987476Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"130da78b66ce9e95","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-13T19:03:35.987924Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:36.025723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 6, index: 3648] sent MsgPreVote request to 130da78b66ce9e95 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.031749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 130da78b66ce9e95 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.031999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-13T19:03:36.032111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.032187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.032218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 6, index: 3648] sent MsgVote request to 130da78b66ce9e95 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 130da78b66ce9e95 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-13T19:03:36.038893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 7"}
	{"level":"warn","ts":"2024-09-13T19:03:36.039109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.616150527s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-13T19:03:36.039440Z","caller":"traceutil/trace.go:171","msg":"trace[220464413] range","detail":"{range_begin:; range_end:; }","duration":"4.616514161s","start":"2024-09-13T19:03:31.422912Z","end":"2024-09-13T19:03:36.039426Z","steps":["trace[220464413] 'agreement among raft nodes before linearized reading'  (duration: 4.616143656s)"],"step_count":1}
	{"level":"error","ts":"2024-09-13T19:03:36.039654Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: leader changed\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 19:09:39 up 27 min,  0 users,  load average: 0.23, 0.30, 0.32
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2] <==
	I0913 19:08:50.925993       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:00.922790       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:00.922841       1 main.go:299] handling current node
	I0913 19:09:00.922861       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:00.922866       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:00.922997       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:00.923022       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:10.925503       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:10.925559       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:10.925692       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:10.925716       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:10.925765       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:10.925781       1 main.go:299] handling current node
	I0913 19:09:20.925111       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:20.925161       1 main.go:299] handling current node
	I0913 19:09:20.925176       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:20.925181       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:20.925352       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:20.925377       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:30.917652       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:30.917705       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:30.917848       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:30.917877       1 main.go:299] handling current node
	I0913 19:09:30.917889       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:30.917894       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e] <==
	I0913 18:57:57.992785       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.986622       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:07.986810       1 main.go:299] handling current node
	I0913 18:58:07.986855       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:07.986874       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.987050       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:07.987072       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988128       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:17.988336       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988500       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:17.988524       1 main.go:299] handling current node
	I0913 18:58:17.988554       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:17.988558       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988426       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:27.988495       1 main.go:299] handling current node
	I0913 18:58:27.988516       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:27.988521       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988689       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:27.988745       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:37.994223       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:37.994340       1 main.go:299] handling current node
	I0913 18:58:37.994361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:37.994371       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:37.994612       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:37.994637       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0] <==
	W0913 19:03:07.117209       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	E0913 19:03:07.118747       1 cacher.go:478] cacher (certificatesigningrequests.certificates.k8s.io): unexpected ListAndWatch error: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.117288       1 reflector.go:561] storage/cacher.go:/priorityclasses: failed to list *scheduling.PriorityClass: etcdserver: request timed out
	E0913 19:03:07.118795       1 cacher.go:478] cacher (priorityclasses.scheduling.k8s.io): unexpected ListAndWatch error: failed to list *scheduling.PriorityClass: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118835       1 reflector.go:561] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	E0913 19:03:07.118860       1 cacher.go:478] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.IngressClass: etcdserver: request timed out
	E0913 19:03:07.118908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: failed to list *v1.IngressClass: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.117873       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119041       1 reflector.go:561] storage/cacher.go:/rolebindings: failed to list *rbac.RoleBinding: etcdserver: request timed out
	E0913 19:03:07.119081       1 cacher.go:478] cacher (rolebindings.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.RoleBinding: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.119107       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	E0913 19:03:07.119130       1 cacher.go:478] cacher (horizontalpodautoscalers.autoscaling): unexpected ListAndWatch error: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: etcdserver: request timed out
	E0913 19:03:07.119187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119292       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0913 19:03:07.119338       1 hooks.go:210] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0913 19:03:07.119412       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.155903       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: request timed out" interval="1.6s"
	W0913 19:03:07.119390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	E0913 19:03:07.155969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out" logger="UnhandledError"
	F0913 19:03:07.147197       1 hooks.go:210] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0913 19:03:07.180431       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0913 19:03:07.188666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0913 19:03:07.188800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out" logger="UnhandledError"
	
	
	==> kube-apiserver [87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534] <==
	I0913 19:04:34.155635       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0913 19:04:34.155675       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0913 19:04:34.145448       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:04:34.145457       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:04:34.243089       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:04:34.243807       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:04:34.246351       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:04:34.247643       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:04:34.248414       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:04:34.248975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:04:34.249015       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:04:34.248452       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:04:34.252424       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:04:34.252462       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:04:34.252481       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:04:34.252485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:04:34.252490       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:04:34.265419       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:04:34.275995       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:04:34.276033       1 policy_source.go:224] refreshing policies
	I0913 19:04:34.323537       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:04:35.150657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0913 19:04:35.562141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
	I0913 19:04:35.563630       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:04:35.569590       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0] <==
	I0913 19:03:16.402728       1 serving.go:386] Generated self-signed cert in-memory
	I0913 19:03:16.703364       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0913 19:03:16.703449       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:03:16.705317       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:03:16.705492       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:03:16.706001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:03:16.705942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0913 19:03:26.708603       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.145:8443/healthz\": dial tcp 192.168.39.145:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858] <==
	I0913 19:06:16.971061       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0913 19:06:16.974278       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:06:16.974370       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:06:16.978058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:06:16.978484       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764"
	I0913 19:06:16.978545       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m02"
	I0913 19:06:16.978578       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m04"
	I0913 19:06:16.978604       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:06:16.978676       1 shared_informer.go:320] Caches are synced for disruption
	I0913 19:06:17.001863       1 shared_informer.go:320] Caches are synced for job
	I0913 19:06:17.002008       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0913 19:06:17.002051       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:06:17.001883       1 shared_informer.go:320] Caches are synced for deployment
	I0913 19:06:17.002279       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0913 19:06:17.002568       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:06:17.003157       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0913 19:06:17.003222       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0913 19:06:17.003340       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:06:17.007098       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:06:17.008302       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:06:17.041807       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0913 19:06:17.083647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:06:17.452004       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:06:17.452044       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:06:17.472553       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48] <==
	E0913 19:02:27.298107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:30.369582       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:02:42.656761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:02:42.657783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:42.657618       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:02:54.945213       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:07.232668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:13.377355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:03:13.377475       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:19.521668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:22.593418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:22.594211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:31.809399       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:31.809484       1 event_broadcaster.go:216] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-617764.17f4e2ef11fb5014  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2024-09-13 19:01:13.616478822 +0000 UTC m=+43.051066987,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-ha-617764,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  ha-617764 ha-617764   },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	W0913 19:04:08.674299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:08.674633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:14.818943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:14.819106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:20.961639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:20.961836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 19:04:46.118228       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:04:48.417939       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:05:14.019838       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163] <==
	E0913 18:54:28.193745       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-617764\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0913 18:54:28.194003       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0913 18:54:28.194170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:54:28.234105       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:54:28.234302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:54:28.234395       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:54:28.237390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:54:28.237818       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:54:28.237860       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:54:28.240362       1 config.go:199] "Starting service config controller"
	I0913 18:54:28.240424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:54:28.240535       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:54:28.240556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:54:28.241385       1 config.go:328] "Starting node config controller"
	I0913 18:54:28.241411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0913 18:54:31.266663       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 18:54:31.266902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.267155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.270424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.270680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 18:54:32.241327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:54:32.541475       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:54:32.642363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e] <==
	W0913 19:03:57.900185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:57.900372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:03:58.563523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:58.563568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.237583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.237716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.479004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.479145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:14.886681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:14.886861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:17.376850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:17.376931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:18.702116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:18.702189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:21.189061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:21.189219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:22.488215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:22.488335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:30.978522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:30.978653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:32.316893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:32.317198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:34.163725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:04:34.163824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 19:04:46.592137       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222] <==
	E0913 18:54:18.337790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.785652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.785751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:23.154505       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:23.154624       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:26.780601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:54:26.780738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.780951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:54:26.781066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:54:26.783651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:54:26.784151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.784400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:54:26.784439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:54:44.032097       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:56:04.977977       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:56:04.978105       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a455845-10fb-415a-badb-63751bb03ec8(default/busybox-7dff88458-hzxvw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hzxvw"
	E0913 18:56:04.978138       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" pod="default/busybox-7dff88458-hzxvw"
	I0913 18:56:04.978160       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:58:44.325787       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:08:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:08:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:08:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:08:28 ha-617764 kubelet[1315]: E0913 19:08:28.952274    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254508951628772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:28 ha-617764 kubelet[1315]: E0913 19:08:28.952307    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254508951628772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: I0913 19:08:38.515464    1315 scope.go:117] "RemoveContainer" containerID="e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: E0913 19:08:38.955116    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254518954688809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: E0913 19:08:38.955171    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254518954688809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:48 ha-617764 kubelet[1315]: E0913 19:08:48.956889    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254528956554093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:48 ha-617764 kubelet[1315]: E0913 19:08:48.957155    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254528956554093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:58 ha-617764 kubelet[1315]: E0913 19:08:58.958906    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254538958539402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:58 ha-617764 kubelet[1315]: E0913 19:08:58.959333    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254538958539402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:08 ha-617764 kubelet[1315]: E0913 19:09:08.960916    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254548960531605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:08 ha-617764 kubelet[1315]: E0913 19:09:08.961343    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254548960531605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:18 ha-617764 kubelet[1315]: E0913 19:09:18.962842    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254558962459188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:18 ha-617764 kubelet[1315]: E0913 19:09:18.962887    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254558962459188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.545670    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:09:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.964511    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254568964115032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.964555    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254568964115032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:38 ha-617764 kubelet[1315]: E0913 19:09:38.968506    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254578967107888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:38 ha-617764 kubelet[1315]: E0913 19:09:38.968540    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254578967107888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:09:38.744783   33974 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (657.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-617764" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-617764\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-617764\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-617764\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.145\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.203\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.238\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,
\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetr
ics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.658598567s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764 -v=7                                                         | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-617764 -v=7                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:51 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	| node    | ha-617764 node delete m03 -v=7                                                 | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-617764 stop -v=7                                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true                                                       | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:58 UTC |                     |
	|         | -v=7 --alsologtostderr                                                         |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                       |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:58:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:58:43.150705   31446 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:58:43.150823   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150831   31446 out.go:358] Setting ErrFile to fd 2...
	I0913 18:58:43.150835   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150989   31446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:58:43.151527   31446 out.go:352] Setting JSON to false
	I0913 18:58:43.152444   31446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2466,"bootTime":1726251457,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:58:43.152531   31446 start.go:139] virtualization: kvm guest
	I0913 18:58:43.155078   31446 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:58:43.156678   31446 notify.go:220] Checking for updates...
	I0913 18:58:43.156709   31446 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:58:43.158268   31446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:58:43.159544   31446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:58:43.160767   31446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:58:43.162220   31446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:58:43.163615   31446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:58:43.165451   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:43.165853   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.165907   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.180911   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0913 18:58:43.181388   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.181949   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.181971   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.182353   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.182521   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.182750   31446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:58:43.183084   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.183122   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.197519   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0913 18:58:43.197916   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.198411   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.198429   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.198758   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.198946   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.235966   31446 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:58:43.237300   31446 start.go:297] selected driver: kvm2
	I0913 18:58:43.237333   31446 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.237501   31446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:58:43.237936   31446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.238020   31446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:58:43.253448   31446 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:58:43.254210   31446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:58:43.254249   31446 cni.go:84] Creating CNI manager for ""
	I0913 18:58:43.254286   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 18:58:43.254380   31446 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.254578   31446 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.257570   31446 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:58:43.258900   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:58:43.258938   31446 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:58:43.258945   31446 cache.go:56] Caching tarball of preloaded images
	I0913 18:58:43.259017   31446 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:58:43.259028   31446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:58:43.259156   31446 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:58:43.259345   31446 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:58:43.259392   31446 start.go:364] duration metric: took 31.174µs to acquireMachinesLock for "ha-617764"
	I0913 18:58:43.259405   31446 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:58:43.259413   31446 fix.go:54] fixHost starting: 
	I0913 18:58:43.259679   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.259711   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.274822   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I0913 18:58:43.275298   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.275852   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.275878   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.276311   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.276486   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.276663   31446 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:58:43.278189   31446 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:58:43.278219   31446 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:58:43.280067   31446 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:58:43.281138   31446 machine.go:93] provisionDockerMachine start ...
	I0913 18:58:43.281155   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.281323   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.284023   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284521   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.284555   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284669   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.284825   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.284952   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.285055   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.285196   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.285409   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.285420   31446 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:58:43.394451   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.394477   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394708   31446 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:58:43.394736   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394924   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.397704   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398088   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.398141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398322   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.398529   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398740   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398893   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.399057   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.399258   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.399275   31446 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:58:43.520106   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.520131   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.522812   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523152   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.523170   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523391   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.523571   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523748   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523885   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.524100   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.524293   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.524308   31446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:58:43.635855   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
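Each provisioning step above (reading the hostname, rewriting it, patching /etc/hosts) is a single command executed over SSH as the docker user with the machine's id_rsa key. A minimal sketch of that pattern using golang.org/x/crypto/ssh follows; the address, user and key path are taken from the log, the rest is illustrative and not minikube's own ssh_runner/sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// Minimal sketch of running one provisioning command over SSH, in the spirit
// of the ssh_runner lines above. Address, user and key path come from the log;
// everything else is an illustrative assumption.
func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run a single command and capture stdout+stderr, like the "hostname" step above.
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}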
	I0913 18:58:43.635900   31446 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:58:43.635930   31446 buildroot.go:174] setting up certificates
	I0913 18:58:43.635943   31446 provision.go:84] configureAuth start
	I0913 18:58:43.635958   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.636270   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:58:43.638723   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639091   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.639122   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639263   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.641516   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.641896   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.641921   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.642009   31446 provision.go:143] copyHostCerts
	I0913 18:58:43.642045   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642090   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:58:43.642118   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642204   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:58:43.642317   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642345   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:58:43.642351   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642393   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:58:43.642482   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642507   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:58:43.642516   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642554   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:58:43.642629   31446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
	I0913 18:58:44.051872   31446 provision.go:177] copyRemoteCerts
	I0913 18:58:44.051926   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:58:44.051949   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.054378   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054746   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.054779   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054963   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.055136   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.055290   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.055443   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:58:44.136923   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:58:44.136991   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:58:44.167349   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:58:44.167474   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0913 18:58:44.192816   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:58:44.192890   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:58:44.219869   31446 provision.go:87] duration metric: took 583.909353ms to configureAuth
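configureAuth above re-copies the host CA material and generates a server certificate whose SANs are [127.0.0.1 192.168.39.145 ha-617764 localhost minikube]. The sketch below issues a certificate carrying those SANs via crypto/x509; for brevity it is self-signed, whereas provision.go signs with the ca.pem/ca-key.pem pair, and the key size and validity period are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Sketch of issuing a server certificate with the SANs listed in the log.
// Self-signed here for brevity; key size and validity are assumed values.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-617764"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-617764", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}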
	I0913 18:58:44.219902   31446 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:58:44.220142   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:44.220219   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.222922   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223448   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.223533   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223808   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.224007   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224174   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224308   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.224474   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:44.224676   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:44.224698   31446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:00:18.789819   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:00:18.789840   31446 machine.go:96] duration metric: took 1m35.508690532s to provisionDockerMachine
	I0913 19:00:18.789851   31446 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 19:00:18.789861   31446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:00:18.789874   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.790220   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:00:18.790251   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.793500   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.793848   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.793875   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.794048   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.794238   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.794385   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.794569   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:18.877285   31446 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:00:18.883268   31446 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:00:18.883297   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:00:18.883423   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:00:18.883612   31446 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:00:18.883631   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:00:18.883718   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:00:18.893226   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:18.920369   31446 start.go:296] duration metric: took 130.503832ms for postStartSetup
	I0913 19:00:18.920414   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.920676   31446 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 19:00:18.920707   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.923635   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924114   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.924141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924348   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.924535   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.924698   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.924850   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 19:00:19.009141   31446 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 19:00:19.009172   31446 fix.go:56] duration metric: took 1m35.749758939s for fixHost
	I0913 19:00:19.009198   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.011920   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012313   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.012336   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012505   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.012684   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012842   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012978   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.013111   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 19:00:19.013373   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 19:00:19.013392   31446 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:00:19.118884   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254019.083169511
	
	I0913 19:00:19.118912   31446 fix.go:216] guest clock: 1726254019.083169511
	I0913 19:00:19.118923   31446 fix.go:229] Guest: 2024-09-13 19:00:19.083169511 +0000 UTC Remote: 2024-09-13 19:00:19.009181164 +0000 UTC m=+95.893684428 (delta=73.988347ms)
	I0913 19:00:19.118983   31446 fix.go:200] guest clock delta is within tolerance: 73.988347ms
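The clock check above reads the guest time over SSH (date +%s.%N), compares it with the host-side timestamp and accepts the roughly 74ms skew. Below is a small sketch of the same delta computation, using the two timestamps from the log and an assumed 2-second tolerance (the real threshold in fix.go is not shown in this output).

package main

import (
	"fmt"
	"time"
)

// Sketch of the guest-vs-host clock comparison logged above. The two
// timestamps are the ones printed by fix.go; the 2-second tolerance is an
// assumption for illustration, not minikube's actual threshold.
func main() {
	guest := time.Date(2024, 9, 13, 19, 0, 19, 83169511, time.UTC)
	remote := time.Date(2024, 9, 13, 19, 0, 19, 9181164, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta) // prints 73.988347ms

	const tolerance = 2 * time.Second // assumed
	if delta <= tolerance {
		fmt.Println("within tolerance, keeping guest clock as-is")
	} else {
		fmt.Println("outside tolerance, guest clock would be resynced")
	}
}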
	I0913 19:00:19.118991   31446 start.go:83] releasing machines lock for "ha-617764", held for 1m35.85958928s
	I0913 19:00:19.119022   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.119255   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:19.121927   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122454   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.122593   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122762   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123286   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123470   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123531   31446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:00:19.123584   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.123664   31446 ssh_runner.go:195] Run: cat /version.json
	I0913 19:00:19.123680   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.126137   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126495   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126557   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126605   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126870   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.126965   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126997   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.127049   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127133   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.127204   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127289   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127344   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.127430   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127554   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.230613   31446 ssh_runner.go:195] Run: systemctl --version
	I0913 19:00:19.238299   31446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:00:19.405183   31446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:00:19.411872   31446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:00:19.411926   31446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:00:19.421058   31446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:00:19.421086   31446 start.go:495] detecting cgroup driver to use...
	I0913 19:00:19.421155   31446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:00:19.436778   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:00:19.450920   31446 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:00:19.450979   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:00:19.464921   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:00:19.478168   31446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:00:19.645366   31446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:00:19.801636   31446 docker.go:233] disabling docker service ...
	I0913 19:00:19.801712   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:00:19.818239   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:00:19.832446   31446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:00:19.978995   31446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:00:20.122997   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:00:20.139838   31446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:00:20.159570   31446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:00:20.159648   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.172313   31446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:00:20.172387   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.183969   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.195156   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.206292   31446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:00:20.218569   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.229457   31446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.241787   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.252269   31446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:00:20.262210   31446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:00:20.272169   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:20.432441   31446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:00:27.397849   31446 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.965372324s)
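The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image becomes registry.k8s.io/pause:3.10, cgroup_manager becomes cgroupfs with conmon_cgroup = "pod" added next to it, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls before cri-o is restarted. A Go sketch of the core substitutions against an assumed sample config follows (the sysctl edit is omitted for brevity; this is not minikube's implementation).

package main

import (
	"fmt"
	"regexp"
)

// Sketch of the in-place edits that the sed commands above apply to
// /etc/crio/crio.conf.d/02-crio.conf. The sample input below is assumed,
// not the file contents from the test VM.
func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// pause_image = "registry.k8s.io/pause:3.10"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// drop any existing conmon_cgroup line ...
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")

	// ... then set cgroup_manager to cgroupfs and re-add conmon_cgroup = "pod"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}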
	I0913 19:00:27.397881   31446 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:00:27.397939   31446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:00:27.404132   31446 start.go:563] Will wait 60s for crictl version
	I0913 19:00:27.404202   31446 ssh_runner.go:195] Run: which crictl
	I0913 19:00:27.407981   31446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:00:27.443823   31446 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:00:27.443905   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.475173   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.506743   31446 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:00:27.508011   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:27.510651   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511033   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:27.511060   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511270   31446 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:00:27.516012   31446 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:00:27.516147   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:00:27.516207   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.563165   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.563185   31446 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:00:27.563228   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.599775   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.599799   31446 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:00:27.599809   31446 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 19:00:27.599915   31446 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:00:27.600007   31446 ssh_runner.go:195] Run: crio config
	I0913 19:00:27.651311   31446 cni.go:84] Creating CNI manager for ""
	I0913 19:00:27.651333   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:00:27.651343   31446 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:00:27.651366   31446 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:00:27.651508   31446 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:00:27.651538   31446 kube-vip.go:115] generating kube-vip config ...
	I0913 19:00:27.651587   31446 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 19:00:27.664287   31446 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 19:00:27.664396   31446 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0913 19:00:27.664455   31446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:00:27.674466   31446 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:00:27.674547   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 19:00:27.684733   31446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 19:00:27.702120   31446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:00:27.719612   31446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 19:00:27.737029   31446 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 19:00:27.755478   31446 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 19:00:27.759223   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:27.910765   31446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:00:27.925634   31446 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 19:00:27.925655   31446 certs.go:194] generating shared ca certs ...
	I0913 19:00:27.925670   31446 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:27.925837   31446 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:00:27.925877   31446 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:00:27.925887   31446 certs.go:256] generating profile certs ...
	I0913 19:00:27.925954   31446 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 19:00:27.925980   31446 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01
	I0913 19:00:27.926001   31446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 19:00:28.083419   31446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 ...
	I0913 19:00:28.083444   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01: {Name:mk5610f7b2a13e2e9a2db0fd30b419eeb2bcec9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083629   31446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 ...
	I0913 19:00:28.083645   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01: {Name:mk0e8fc15f8ef270cc2f47ac846f3a3e4156c718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083740   31446 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 19:00:28.083880   31446 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
	I0913 19:00:28.084003   31446 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 19:00:28.084017   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:00:28.084030   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:00:28.084042   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:00:28.084057   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:00:28.084069   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:00:28.084082   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:00:28.084100   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:00:28.084113   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:00:28.084157   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:00:28.084185   31446 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:00:28.084195   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:00:28.084215   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:00:28.084238   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:00:28.084258   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:00:28.084294   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:28.084323   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:00:28.084336   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.084348   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.084922   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:00:28.111077   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:00:28.134495   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:00:28.159747   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:00:28.182325   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:00:28.205586   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:00:28.229539   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:00:28.252370   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:00:28.275737   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:00:28.300247   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:00:28.324266   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:00:28.347577   31446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:00:28.365115   31446 ssh_runner.go:195] Run: openssl version
	I0913 19:00:28.408066   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:00:28.469517   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486389   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486486   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.525327   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:00:28.652306   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:00:28.760544   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769712   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769775   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.819345   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:00:28.906062   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:00:29.048802   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.102932   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.103020   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.115422   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
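
The three-step pattern repeated above ("ln -fs" into /usr/share/ca-certificates, "openssl x509 -hash", then a "<hash>.0" link in /etc/ssl/certs) is how OpenSSL locates CA certificates by subject-name hash. Below is a sketch only, assuming root privileges; linkCACert is a hypothetical helper, not minikube code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCACert(certPath, certsDir string) error {
        // openssl x509 -hash -noout -in <cert> prints the subject hash,
        // e.g. b5213941 for minikubeCA.pem in this run.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL's c_rehash convention is <hash>.0, with .1, .2, ... on
        // collisions; collisions are not handled in this sketch.
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // the -f in "ln -fs"
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
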
	I0913 19:00:29.318793   31446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:00:29.362153   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:00:29.471278   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:00:29.492455   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:00:29.513786   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:00:29.728338   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:00:29.780205   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
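
The "-checkend 86400" runs above make openssl exit non-zero when a certificate will expire within the next 24 hours. A minimal equivalent using crypto/x509 instead of shelling out is sketched below; the path comes from the log, while the helper name is ours.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon) // mirrors: openssl x509 -checkend 86400
    }
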
	I0913 19:00:29.853145   31446 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:00:29.853301   31446 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:00:29.853366   31446 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:00:30.060193   31446 cri.go:89] found id: "7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2"
	I0913 19:00:30.060217   31446 cri.go:89] found id: "360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e"
	I0913 19:00:30.060223   31446 cri.go:89] found id: "26de4c71cc1f8d3a39e52e622c86361c67e1839a5b84f098c669196c7c161196"
	I0913 19:00:30.060228   31446 cri.go:89] found id: "12d8e3661fa4705e4486cfa4b69b3f31e0b159af038044b195db15b9345f4f4c"
	I0913 19:00:30.060233   31446 cri.go:89] found id: "c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd"
	I0913 19:00:30.060237   31446 cri.go:89] found id: "bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f"
	I0913 19:00:30.060240   31446 cri.go:89] found id: "570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17"
	I0913 19:00:30.060244   31446 cri.go:89] found id: "0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc"
	I0913 19:00:30.060247   31446 cri.go:89] found id: "32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d"
	I0913 19:00:30.060254   31446 cri.go:89] found id: "46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69"
	I0913 19:00:30.060259   31446 cri.go:89] found id: "09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87"
	I0913 19:00:30.060262   31446 cri.go:89] found id: "dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e"
	I0913 19:00:30.060266   31446 cri.go:89] found id: "b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1"
	I0913 19:00:30.060270   31446 cri.go:89] found id: "15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89"
	I0913 19:00:30.060277   31446 cri.go:89] found id: "1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163"
	I0913 19:00:30.060281   31446 cri.go:89] found id: "80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222"
	I0913 19:00:30.060286   31446 cri.go:89] found id: ""
	I0913 19:00:30.060335   31446 ssh_runner.go:195] Run: sudo runc list -f json
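
A sketch of the container-discovery step above: the same crictl flags (-a to include exited containers, --quiet to print only IDs, --label to filter on the pod namespace label) collected into a Go slice. kubeSystemContainerIDs is a hypothetical helper, not the cri.go implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        // One container ID per line, as seen in the "found id:" entries above.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range ids {
            fmt.Println(id)
        }
    }
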
	
	
	==> CRI-O <==
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.630680219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254581630655982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eb9604c-d7b0-4d99-ad9d-1cbeb5145abe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.631085214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a70ee77d-24ad-44ec-b1f1-0d4e4babcfba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.631157577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a70ee77d-24ad-44ec-b1f1-0d4e4babcfba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.631656275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a70ee77d-24ad-44ec-b1f1-0d4e4babcfba name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.681308316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b76bca1f-2cc7-480e-a488-afd84cac8ae6 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.681428851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b76bca1f-2cc7-480e-a488-afd84cac8ae6 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.682689896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8161d83-eeb2-44b4-821f-78d8c30d8b54 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.683352930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254581683321115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8161d83-eeb2-44b4-821f-78d8c30d8b54 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.684476606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=957c50e5-176b-4d31-a45c-68be7162be9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.684671929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=957c50e5-176b-4d31-a45c-68be7162be9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.685294265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=957c50e5-176b-4d31-a45c-68be7162be9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.733925991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2de23ea6-5363-4fa8-9c94-47ff9221882a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.734026686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2de23ea6-5363-4fa8-9c94-47ff9221882a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.735480412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8da99846-192c-4513-ade6-21c1ad16e7fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.736569446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254581736540938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da99846-192c-4513-ade6-21c1ad16e7fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.737345845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a773333-4197-4ab0-80f6-2d02977fadbb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.737425375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a773333-4197-4ab0-80f6-2d02977fadbb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.737826138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a773333-4197-4ab0-80f6-2d02977fadbb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.778696826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dba2dcb8-5a64-4eca-a7f0-15024b90ad6f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.778789124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dba2dcb8-5a64-4eca-a7f0-15024b90ad6f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.780110402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=944a417d-d608-4623-9506-220aff006d02 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.780618716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254581780596853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=944a417d-d608-4623-9506-220aff006d02 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.781027484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=319a3165-d131-4b23-a058-a1916aac5d61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.781099982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=319a3165-d131-4b23-a058-a1916aac5d61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:09:41 ha-617764 crio[6149]: time="2024-09-13 19:09:41.781573628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=319a3165-d131-4b23-a058-a1916aac5d61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	999f5e6003ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner       7                   2ec7df8952268       storage-provisioner
	d9e9ac5d6b79f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   3 minutes ago        Running             kube-controller-manager   6                   b36021c0b35cd       kube-controller-manager-ha-617764
	87156e375ce6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   5 minutes ago        Running             kube-apiserver            6                   639b42fbde0c6       kube-apiserver-ha-617764
	e916b90f9253d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago        Exited              storage-provisioner       6                   2ec7df8952268       storage-provisioner
	8a3f92c39f616       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 minutes ago        Exited              kube-controller-manager   5                   b36021c0b35cd       kube-controller-manager-ha-617764
	50283a2285386       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 minutes ago        Exited              kube-apiserver            5                   639b42fbde0c6       kube-apiserver-ha-617764
	bf7f61f474e78       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   8 minutes ago        Running             busybox                   2                   ae1363f122834       busybox-7dff88458-t4fwq
	70f0f4e37a417       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago        Running             coredns                   2                   43cddd96b7158       coredns-7c65d6cfc9-fdhnm
	7cb162ca4a916       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   9 minutes ago        Running             kindnet-cni               2                   1aee20bf902b8       kindnet-b9bzd
	2ca0aab49c546       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago        Running             coredns                   2                   743d4b43092c6       coredns-7c65d6cfc9-htrbt
	0bdc8b32559cc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago        Running             kube-proxy                2                   90fa239fc72bb       kube-proxy-92mml
	360965c899e52       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago        Running             kube-scheduler            2                   477f3d5572a61       kube-scheduler-ha-617764
	c22324f5733e4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago        Running             etcd                      2                   e94c56bdaeede       etcd-ha-617764
	bc744a6ac873d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   9 minutes ago        Running             kube-vip                  1                   c019543061937       kube-vip-ha-617764
	2bb3333d84624       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   15 minutes ago       Exited              busybox                   1                   0238ab84a5121       busybox-7dff88458-t4fwq
	46d659112c682       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   15 minutes ago       Exited              kube-vip                  0                   566613db4514b       kube-vip-ha-617764
	09fe052337ef3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago       Exited              coredns                   1                   5f1a3394b645b       coredns-7c65d6cfc9-fdhnm
	dddc0dfb6a255       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   15 minutes ago       Exited              kindnet-cni               1                   18e2ef1278c48       kindnet-b9bzd
	b752b1ac699cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago       Exited              coredns                   1                   3a3adb124d23e       coredns-7c65d6cfc9-htrbt
	15c33340e3091       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago       Exited              etcd                      1                   acfcaea56c23e       etcd-ha-617764
	1d1a0b2d1c95e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago       Exited              kube-proxy                1                   09bbefd12114c       kube-proxy-92mml
	80a7cb47dee67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago       Exited              kube-scheduler            1                   a63972ff65b12       kube-scheduler-ha-617764
	
	
	==> coredns [09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87] <==
	Trace[818669773]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:54:01.526)
	Trace[818669773]: [10.000979018s] [10.000979018s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465] <==
	Trace[935271282]: [14.299786922s] [14.299786922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:09:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:04:46 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 15m                   kube-proxy       
	  Normal   Starting                 27m                   kube-proxy       
	  Normal   Starting                 27m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           27m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           26m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           24m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           15m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           15m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           14m                   node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   NodeNotReady             12m                   node-controller  Node ha-617764 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 27m)     kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m (x2 over 26m)     kubelet          Node ha-617764 status is now: NodeReady
	  Warning  ContainerGCFailed        10m (x3 over 17m)     kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m25s (x10 over 16m)  kubelet          Node ha-617764 status is now: NodeNotReady
	  Normal   RegisteredNode           3m26s                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:09:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:04:41 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    3ff149de-a1f6-4a53-9c3a-07c56d69cf30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m23s              kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 26m                kube-proxy       
	  Normal   NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             22m                node-controller  Node ha-617764-m02 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             8m40s              kubelet          Node ha-617764-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        8m40s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m26s              node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:56:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hzxvw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-47jgz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-5rlkn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeReady                23m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 13m                kubelet          Node ha-617764-m04 has been rebooted, boot id: 44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Normal   NodeReady                13m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   NodeNotReady             12m (x2 over 14m)  node-controller  Node ha-617764-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m26s              node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	
	
	==> dmesg <==
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	[Sep13 18:53] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.152592] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.176959] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +0.278033] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +6.938453] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.087335] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.505183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221465] kauditd_printk_skb: 85 callbacks suppressed
	[Sep13 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.066370] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 19:00] systemd-fstab-generator[6064]: Ignoring "noauto" option for root device
	[  +0.171401] systemd-fstab-generator[6082]: Ignoring "noauto" option for root device
	[  +0.186624] systemd-fstab-generator[6096]: Ignoring "noauto" option for root device
	[  +0.141420] systemd-fstab-generator[6108]: Ignoring "noauto" option for root device
	[  +0.313065] systemd-fstab-generator[6136]: Ignoring "noauto" option for root device
	[  +7.472494] systemd-fstab-generator[6247]: Ignoring "noauto" option for root device
	[  +0.086449] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.730244] kauditd_printk_skb: 117 callbacks suppressed
	
	
	==> etcd [15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89] <==
	{"level":"info","ts":"2024-09-13T18:58:44.411752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term 3] starts to transfer leadership to 130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.411785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 sends MsgTimeoutNow to 130da78b66ce9e95 immediately as 130da78b66ce9e95 already has up-to-date log"}
	{"level":"info","ts":"2024-09-13T18:58:44.414478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term: 3] received a MsgVote message with higher term from 130da78b66ce9e95 [term: 4]"}
	{"level":"info","ts":"2024-09-13T18:58:44.414534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became follower at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 3, index: 3644, vote: 0] cast MsgVote for 130da78b66ce9e95 [logterm: 3, index: 3644] at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 lost leader 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.416226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 130da78b66ce9e95 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.512693Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"44b3a0f32f80bb09","old-leader-member-id":"44b3a0f32f80bb09","new-leader-member-id":"130da78b66ce9e95","took":"101.001068ms"}
	{"level":"info","ts":"2024-09-13T18:58:44.512832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.513914Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.514037Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515584Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515625Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515668Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515788Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515815Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"130da78b66ce9e95","error":"failed to read 130da78b66ce9e95 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-13T18:58:44.515846Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515937Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"info","ts":"2024-09-13T18:58:44.515950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515960Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.522046Z","caller":"rafthttp/http.go:413","msg":"failed to find remote peer in cluster","local-member-id":"44b3a0f32f80bb09","remote-peer-id-stream-handler":"44b3a0f32f80bb09","remote-peer-id-from":"130da78b66ce9e95","cluster-id":"33ee9922f2bf4379"}
	{"level":"info","ts":"2024-09-13T18:58:44.522270Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"warn","ts":"2024-09-13T18:58:44.523349Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.203:60554","server-name":"","error":"set tcp 192.168.39.145:2380: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T18:58:45.058204Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-13T18:58:45.058341Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-617764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd] <==
	{"level":"warn","ts":"2024-09-13T19:03:35.527602Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"130da78b66ce9e95","rtt":"0s","error":"dial tcp 192.168.39.203:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T19:03:35.930750Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13477463805937998108,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-13T19:03:35.937002Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.937054Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.952129Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.982627Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"130da78b66ce9e95","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-13T19:03:35.982754Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:35.987476Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"130da78b66ce9e95","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-13T19:03:35.987924Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T19:03:36.025723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.025854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 6, index: 3648] sent MsgPreVote request to 130da78b66ce9e95 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.031749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 130da78b66ce9e95 at term 6"}
	{"level":"info","ts":"2024-09-13T19:03:36.031999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-13T19:03:36.032111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.032187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.032218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 6, index: 3648] sent MsgVote request to 130da78b66ce9e95 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 130da78b66ce9e95 at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-13T19:03:36.038893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 7"}
	{"level":"info","ts":"2024-09-13T19:03:36.038913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 7"}
	{"level":"warn","ts":"2024-09-13T19:03:36.039109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.616150527s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-13T19:03:36.039440Z","caller":"traceutil/trace.go:171","msg":"trace[220464413] range","detail":"{range_begin:; range_end:; }","duration":"4.616514161s","start":"2024-09-13T19:03:31.422912Z","end":"2024-09-13T19:03:36.039426Z","steps":["trace[220464413] 'agreement among raft nodes before linearized reading'  (duration: 4.616143656s)"],"step_count":1}
	{"level":"error","ts":"2024-09-13T19:03:36.039654Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: leader changed\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 19:09:42 up 27 min,  0 users,  load average: 0.21, 0.29, 0.32
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2] <==
	I0913 19:09:00.923022       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:10.925503       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:10.925559       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:10.925692       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:10.925716       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:10.925765       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:10.925781       1 main.go:299] handling current node
	I0913 19:09:20.925111       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:20.925161       1 main.go:299] handling current node
	I0913 19:09:20.925176       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:20.925181       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:20.925352       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:20.925377       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:30.917652       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:30.917705       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:09:30.917848       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:30.917877       1 main.go:299] handling current node
	I0913 19:09:30.917889       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:30.917894       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:40.916872       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:09:40.916920       1 main.go:299] handling current node
	I0913 19:09:40.916947       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:09:40.916952       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:09:40.917108       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:09:40.917113       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e] <==
	I0913 18:57:57.992785       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.986622       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:07.986810       1 main.go:299] handling current node
	I0913 18:58:07.986855       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:07.986874       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.987050       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:07.987072       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988128       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:17.988336       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988500       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:17.988524       1 main.go:299] handling current node
	I0913 18:58:17.988554       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:17.988558       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988426       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:27.988495       1 main.go:299] handling current node
	I0913 18:58:27.988516       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:27.988521       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988689       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:27.988745       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:37.994223       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:37.994340       1 main.go:299] handling current node
	I0913 18:58:37.994361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:37.994371       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:37.994612       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:37.994637       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
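
	Both kindnet containers above log the same loop: walk the node list and note each node's pod CIDR. A rough client-go equivalent is sketched below purely to show where the "Node ... has CIDR ..." values come from (Node.Spec.PodCIDRs); the kubeconfig path is an assumption and this is not kindnet's actual implementation.

	// list_pod_cidrs.go: minimal sketch, assuming a kubeconfig at the default path.
	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build clientset: %v", err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list nodes: %v", err)
		}
		for _, n := range nodes.Items {
			// Spec.PodCIDRs is what yields lines like "Node ha-617764-m02 has CIDR [10.244.1.0/24]".
			fmt.Printf("Node %s has CIDR %v\n", n.Name, n.Spec.PodCIDRs)
		}
	}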
	
	
	==> kube-apiserver [50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0] <==
	W0913 19:03:07.117209       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	E0913 19:03:07.118747       1 cacher.go:478] cacher (certificatesigningrequests.certificates.k8s.io): unexpected ListAndWatch error: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.117288       1 reflector.go:561] storage/cacher.go:/priorityclasses: failed to list *scheduling.PriorityClass: etcdserver: request timed out
	E0913 19:03:07.118795       1 cacher.go:478] cacher (priorityclasses.scheduling.k8s.io): unexpected ListAndWatch error: failed to list *scheduling.PriorityClass: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118835       1 reflector.go:561] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	E0913 19:03:07.118860       1 cacher.go:478] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.IngressClass: etcdserver: request timed out
	E0913 19:03:07.118908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: failed to list *v1.IngressClass: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.117873       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119041       1 reflector.go:561] storage/cacher.go:/rolebindings: failed to list *rbac.RoleBinding: etcdserver: request timed out
	E0913 19:03:07.119081       1 cacher.go:478] cacher (rolebindings.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.RoleBinding: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.119107       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	E0913 19:03:07.119130       1 cacher.go:478] cacher (horizontalpodautoscalers.autoscaling): unexpected ListAndWatch error: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: etcdserver: request timed out
	E0913 19:03:07.119187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119292       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0913 19:03:07.119338       1 hooks.go:210] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0913 19:03:07.119412       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.155903       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: request timed out" interval="1.6s"
	W0913 19:03:07.119390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	E0913 19:03:07.155969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out" logger="UnhandledError"
	F0913 19:03:07.147197       1 hooks.go:210] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0913 19:03:07.180431       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0913 19:03:07.188666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0913 19:03:07.188800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out" logger="UnhandledError"
	
	
	==> kube-apiserver [87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534] <==
	I0913 19:04:34.155635       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0913 19:04:34.155675       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0913 19:04:34.145448       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:04:34.145457       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:04:34.243089       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:04:34.243807       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:04:34.246351       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:04:34.247643       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:04:34.248414       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:04:34.248975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:04:34.249015       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:04:34.248452       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:04:34.252424       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:04:34.252462       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:04:34.252481       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:04:34.252485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:04:34.252490       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:04:34.265419       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:04:34.275995       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:04:34.276033       1 policy_source.go:224] refreshing policies
	I0913 19:04:34.323537       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:04:35.150657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0913 19:04:35.562141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
	I0913 19:04:35.563630       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:04:35.569590       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0] <==
	I0913 19:03:16.402728       1 serving.go:386] Generated self-signed cert in-memory
	I0913 19:03:16.703364       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0913 19:03:16.703449       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:03:16.705317       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:03:16.705492       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:03:16.706001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:03:16.705942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0913 19:03:26.708603       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.145:8443/healthz\": dial tcp 192.168.39.145:8443: connect: connection refused"
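
	This controller-manager instance gives up because https://192.168.39.145:8443/healthz keeps refusing connections while the apiserver is down. A small sketch of the same style of probe is shown below; the address is taken from the log, and skipping certificate verification is an assumption made only to keep the example self-contained (a real probe would trust the cluster CA).

	// apiserver_probe.go: minimal sketch of polling the apiserver health endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption for brevity
			},
		}

		for i := 0; i < 10; i++ {
			resp, err := client.Get("https://192.168.39.145:8443/healthz")
			if err != nil {
				fmt.Printf("attempt %d: %v\n", i+1, err) // e.g. "connect: connection refused"
				time.Sleep(time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %s %s\n", i+1, resp.Status, body)
			return
		}
	}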
	
	
	==> kube-controller-manager [d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858] <==
	I0913 19:06:16.971061       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0913 19:06:16.974278       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:06:16.974370       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:06:16.978058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:06:16.978484       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764"
	I0913 19:06:16.978545       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m02"
	I0913 19:06:16.978578       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m04"
	I0913 19:06:16.978604       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:06:16.978676       1 shared_informer.go:320] Caches are synced for disruption
	I0913 19:06:17.001863       1 shared_informer.go:320] Caches are synced for job
	I0913 19:06:17.002008       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0913 19:06:17.002051       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:06:17.001883       1 shared_informer.go:320] Caches are synced for deployment
	I0913 19:06:17.002279       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0913 19:06:17.002568       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:06:17.003157       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0913 19:06:17.003222       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0913 19:06:17.003340       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:06:17.007098       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:06:17.008302       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:06:17.041807       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0913 19:06:17.083647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:06:17.452004       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:06:17.452044       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:06:17.472553       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48] <==
	E0913 19:02:27.298107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:30.369582       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:02:42.656761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:02:42.657783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:42.657618       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:02:54.945213       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:07.232668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:13.377355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:03:13.377475       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:19.521668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:22.593418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:22.594211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:31.809399       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:31.809484       1 event_broadcaster.go:216] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-617764.17f4e2ef11fb5014  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2024-09-13 19:01:13.616478822 +0000 UTC m=+43.051066987,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-ha-617764,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  ha-617764 ha-617764   },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	W0913 19:04:08.674299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:08.674633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:14.818943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:14.819106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:20.961639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:20.961836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 19:04:46.118228       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:04:48.417939       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:05:14.019838       1 shared_informer.go:320] Caches are synced for node config
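
	Every failure in this kube-proxy log is the same symptom: no route to the load-balanced control-plane endpoint 192.168.39.254:8443 while the virtual IP was unavailable. A quick reachability sketch is below; the address is the one from the log, and a plain TCP dial is only a generic connectivity check, not what kube-proxy itself does.

	// vip_dial_check.go: minimal sketch of checking whether the HA virtual IP accepts TCP connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.254:8443" // control-plane.minikube.internal as resolved in the log

		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err) // e.g. "no route to host"
			return
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}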
	
	
	==> kube-proxy [1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163] <==
	E0913 18:54:28.193745       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-617764\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0913 18:54:28.194003       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0913 18:54:28.194170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:54:28.234105       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:54:28.234302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:54:28.234395       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:54:28.237390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:54:28.237818       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:54:28.237860       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:54:28.240362       1 config.go:199] "Starting service config controller"
	I0913 18:54:28.240424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:54:28.240535       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:54:28.240556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:54:28.241385       1 config.go:328] "Starting node config controller"
	I0913 18:54:28.241411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0913 18:54:31.266663       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 18:54:31.266902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.267155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.270424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.270680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 18:54:32.241327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:54:32.541475       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:54:32.642363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e] <==
	W0913 19:03:57.900185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:57.900372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:03:58.563523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:58.563568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.237583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.237716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.479004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.479145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:14.886681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:14.886861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:17.376850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:17.376931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:18.702116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:18.702189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:21.189061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:21.189219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:22.488215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:22.488335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:30.978522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:30.978653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:32.316893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:32.317198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:34.163725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:04:34.163824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 19:04:46.592137       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222] <==
	E0913 18:54:18.337790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.785652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.785751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:23.154505       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:23.154624       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:26.780601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:54:26.780738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.780951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:54:26.781066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:54:26.783651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:54:26.784151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.784400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:54:26.784439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:54:44.032097       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:56:04.977977       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:56:04.978105       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a455845-10fb-415a-badb-63751bb03ec8(default/busybox-7dff88458-hzxvw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hzxvw"
	E0913 18:56:04.978138       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" pod="default/busybox-7dff88458-hzxvw"
	I0913 18:56:04.978160       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:58:44.325787       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:08:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:08:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:08:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:08:28 ha-617764 kubelet[1315]: E0913 19:08:28.952274    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254508951628772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:28 ha-617764 kubelet[1315]: E0913 19:08:28.952307    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254508951628772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: I0913 19:08:38.515464    1315 scope.go:117] "RemoveContainer" containerID="e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: E0913 19:08:38.955116    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254518954688809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:38 ha-617764 kubelet[1315]: E0913 19:08:38.955171    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254518954688809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:48 ha-617764 kubelet[1315]: E0913 19:08:48.956889    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254528956554093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:48 ha-617764 kubelet[1315]: E0913 19:08:48.957155    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254528956554093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:58 ha-617764 kubelet[1315]: E0913 19:08:58.958906    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254538958539402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:08:58 ha-617764 kubelet[1315]: E0913 19:08:58.959333    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254538958539402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:08 ha-617764 kubelet[1315]: E0913 19:09:08.960916    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254548960531605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:08 ha-617764 kubelet[1315]: E0913 19:09:08.961343    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254548960531605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:18 ha-617764 kubelet[1315]: E0913 19:09:18.962842    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254558962459188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:18 ha-617764 kubelet[1315]: E0913 19:09:18.962887    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254558962459188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.545670    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:09:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:09:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.964511    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254568964115032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:28 ha-617764 kubelet[1315]: E0913 19:09:28.964555    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254568964115032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:38 ha-617764 kubelet[1315]: E0913 19:09:38.968506    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254578967107888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:09:38 ha-617764 kubelet[1315]: E0913 19:09:38.968540    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254578967107888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
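
	The recurring kubelet canary failure means the ip6tables `nat' table is unavailable in the guest, commonly because the ip6table_nat kernel module is not loaded in the Buildroot image. The sketch below checks for the module by reading /proc/modules; this is a generic Linux check offered for illustration, not something kubelet does, and it will not detect a build where the table is compiled into the kernel.

	// ip6nat_module_check.go: minimal sketch that looks for ip6table_nat in /proc/modules.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/modules")
		if err != nil {
			log.Fatalf("open /proc/modules: %v", err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Each line of /proc/modules starts with the module name.
			if strings.HasPrefix(sc.Text(), "ip6table_nat ") {
				fmt.Println("ip6table_nat is loaded")
				return
			}
		}
		fmt.Println("ip6table_nat not loaded (try: modprobe ip6table_nat)")
	}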
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:09:41.346169   34117 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
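
The only stderr line above is minikube's log collector failing to read lastStart.txt because a single line exceeds bufio.Scanner's default 64 KiB token limit. A minimal sketch of reading such a file with an enlarged scanner buffer follows; the file path and the 10 MiB cap are illustrative assumptions, not minikube's actual settings.

	// read_long_lines.go: minimal sketch of avoiding "bufio.Scanner: token too long"
	// by raising the scanner's maximum token size.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // assumed local copy of the offending file
		if err != nil {
			log.Fatalf("open: %v", err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // default max is 64 KiB; raise it to 10 MiB

		lines := 0
		for sc.Scan() {
			lines++
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan: %v", err) // without the Buffer call this is where "token too long" surfaces
		}
		fmt.Printf("read %d lines\n", lines)
	}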
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (125.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-617764 --control-plane -v=7 --alsologtostderr
E0913 19:10:57.576319   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-617764 --control-plane -v=7 --alsologtostderr: signal: killed (2m2.997210866s)

                                                
                                                
-- stdout --
	* Adding node m05 to cluster ha-617764 as [worker control-plane]
	* Starting "ha-617764-m05" control-plane node in "ha-617764" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
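
The stdout above stops at "Verifying Kubernetes components...", the step during which the process was killed, so m05 never reports Ready. A rough client-go sketch of the kind of node-readiness check such verification implies is shown below; the kubeconfig path is a placeholder and this is not minikube's own verification code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; any kubeconfig pointing at the ha-617764 cluster works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("%s Ready=%s\n", n.Name, c.Status)
                }
            }
        }
    }
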
** stderr ** 
	I0913 19:09:43.350584   34198 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:09:43.350700   34198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:09:43.350710   34198 out.go:358] Setting ErrFile to fd 2...
	I0913 19:09:43.350716   34198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:09:43.350912   34198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:09:43.351203   34198 mustload.go:65] Loading cluster: ha-617764
	I0913 19:09:43.351601   34198 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:09:43.352062   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:09:43.352133   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:09:43.366907   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0913 19:09:43.367399   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:09:43.367896   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:09:43.367917   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:09:43.368314   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:09:43.368470   34198 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 19:09:43.369836   34198 host.go:66] Checking if "ha-617764" exists ...
	I0913 19:09:43.370168   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:09:43.370210   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:09:43.385881   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0913 19:09:43.386320   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:09:43.386820   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:09:43.386840   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:09:43.387131   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:09:43.387318   34198 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:09:43.387822   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:09:43.387856   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:09:43.402592   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0913 19:09:43.403023   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:09:43.403523   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:09:43.403544   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:09:43.403842   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:09:43.404011   34198 main.go:141] libmachine: (ha-617764-m02) Calling .GetState
	I0913 19:09:43.405691   34198 host.go:66] Checking if "ha-617764-m02" exists ...
	I0913 19:09:43.406016   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:09:43.406062   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:09:43.420446   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36337
	I0913 19:09:43.420849   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:09:43.421422   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:09:43.421443   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:09:43.421735   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:09:43.421900   34198 main.go:141] libmachine: (ha-617764-m02) Calling .DriverName
	I0913 19:09:43.422040   34198 api_server.go:166] Checking apiserver status ...
	I0913 19:09:43.422075   34198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:09:43.422116   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:09:43.424700   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:09:43.425157   34198 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:09:43.425182   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:09:43.425317   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:09:43.425491   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:09:43.425614   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:09:43.425754   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:09:43.512947   34198 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7838/cgroup
	W0913 19:09:43.524957   34198 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:09:43.525002   34198 ssh_runner.go:195] Run: ls
	I0913 19:09:43.530782   34198 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0913 19:09:43.535310   34198 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0913 19:09:43.537364   34198 out.go:177] * Adding node m05 to cluster ha-617764 as [worker control-plane]
	I0913 19:09:43.538669   34198 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:09:43.538774   34198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 19:09:43.540283   34198 out.go:177] * Starting "ha-617764-m05" control-plane node in "ha-617764" cluster
	I0913 19:09:43.541346   34198 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:09:43.541382   34198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 19:09:43.541399   34198 cache.go:56] Caching tarball of preloaded images
	I0913 19:09:43.541495   34198 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:09:43.541508   34198 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 19:09:43.541632   34198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 19:09:43.541847   34198 start.go:360] acquireMachinesLock for ha-617764-m05: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:09:43.541897   34198 start.go:364] duration metric: took 23.933µs to acquireMachinesLock for "ha-617764-m05"
	I0913 19:09:43.541921   34198 start.go:93] Provisioning new machine with config: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0913 19:09:43.542116   34198 start.go:125] createHost starting for "m05" (driver="kvm2")
	I0913 19:09:43.543618   34198 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 19:09:43.543746   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:09:43.543787   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:09:43.559116   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0913 19:09:43.559531   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:09:43.560011   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:09:43.560036   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:09:43.560342   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:09:43.560521   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetMachineName
	I0913 19:09:43.560658   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:09:43.560791   34198 start.go:159] libmachine.API.Create for "ha-617764" (driver="kvm2")
	I0913 19:09:43.560821   34198 client.go:168] LocalClient.Create starting
	I0913 19:09:43.560864   34198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 19:09:43.560901   34198 main.go:141] libmachine: Decoding PEM data...
	I0913 19:09:43.560921   34198 main.go:141] libmachine: Parsing certificate...
	I0913 19:09:43.560996   34198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 19:09:43.561023   34198 main.go:141] libmachine: Decoding PEM data...
	I0913 19:09:43.561038   34198 main.go:141] libmachine: Parsing certificate...
	I0913 19:09:43.561064   34198 main.go:141] libmachine: Running pre-create checks...
	I0913 19:09:43.561075   34198 main.go:141] libmachine: (ha-617764-m05) Calling .PreCreateCheck
	I0913 19:09:43.561254   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetConfigRaw
	I0913 19:09:43.561625   34198 main.go:141] libmachine: Creating machine...
	I0913 19:09:43.561639   34198 main.go:141] libmachine: (ha-617764-m05) Calling .Create
	I0913 19:09:43.561769   34198 main.go:141] libmachine: (ha-617764-m05) Creating KVM machine...
	I0913 19:09:43.562922   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found existing default KVM network
	I0913 19:09:43.563034   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found existing private KVM network mk-ha-617764
	I0913 19:09:43.563208   34198 main.go:141] libmachine: (ha-617764-m05) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05 ...
	I0913 19:09:43.563235   34198 main.go:141] libmachine: (ha-617764-m05) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 19:09:43.563312   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:43.563205   34234 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:09:43.563387   34198 main.go:141] libmachine: (ha-617764-m05) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 19:09:43.796383   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:43.796237   34234 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa...
	I0913 19:09:43.863858   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:43.863721   34234 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/ha-617764-m05.rawdisk...
	I0913 19:09:43.863897   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Writing magic tar header
	I0913 19:09:43.863911   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Writing SSH key tar header
	I0913 19:09:43.863924   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:43.863865   34234 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05 ...
	I0913 19:09:43.864005   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05
	I0913 19:09:43.864031   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05 (perms=drwx------)
	I0913 19:09:43.864047   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 19:09:43.864062   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 19:09:43.864077   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:09:43.864091   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 19:09:43.864183   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 19:09:43.864249   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 19:09:43.864264   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 19:09:43.864276   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 19:09:43.864303   34198 main.go:141] libmachine: (ha-617764-m05) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 19:09:43.864316   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home/jenkins
	I0913 19:09:43.864327   34198 main.go:141] libmachine: (ha-617764-m05) Creating domain...
	I0913 19:09:43.864353   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Checking permissions on dir: /home
	I0913 19:09:43.864371   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Skipping /home - not owner
	I0913 19:09:43.865153   34198 main.go:141] libmachine: (ha-617764-m05) define libvirt domain using xml: 
	I0913 19:09:43.865176   34198 main.go:141] libmachine: (ha-617764-m05) <domain type='kvm'>
	I0913 19:09:43.865203   34198 main.go:141] libmachine: (ha-617764-m05)   <name>ha-617764-m05</name>
	I0913 19:09:43.865211   34198 main.go:141] libmachine: (ha-617764-m05)   <memory unit='MiB'>2200</memory>
	I0913 19:09:43.865218   34198 main.go:141] libmachine: (ha-617764-m05)   <vcpu>2</vcpu>
	I0913 19:09:43.865224   34198 main.go:141] libmachine: (ha-617764-m05)   <features>
	I0913 19:09:43.865232   34198 main.go:141] libmachine: (ha-617764-m05)     <acpi/>
	I0913 19:09:43.865237   34198 main.go:141] libmachine: (ha-617764-m05)     <apic/>
	I0913 19:09:43.865244   34198 main.go:141] libmachine: (ha-617764-m05)     <pae/>
	I0913 19:09:43.865251   34198 main.go:141] libmachine: (ha-617764-m05)     
	I0913 19:09:43.865258   34198 main.go:141] libmachine: (ha-617764-m05)   </features>
	I0913 19:09:43.865268   34198 main.go:141] libmachine: (ha-617764-m05)   <cpu mode='host-passthrough'>
	I0913 19:09:43.865275   34198 main.go:141] libmachine: (ha-617764-m05)   
	I0913 19:09:43.865281   34198 main.go:141] libmachine: (ha-617764-m05)   </cpu>
	I0913 19:09:43.865292   34198 main.go:141] libmachine: (ha-617764-m05)   <os>
	I0913 19:09:43.865299   34198 main.go:141] libmachine: (ha-617764-m05)     <type>hvm</type>
	I0913 19:09:43.865307   34198 main.go:141] libmachine: (ha-617764-m05)     <boot dev='cdrom'/>
	I0913 19:09:43.865317   34198 main.go:141] libmachine: (ha-617764-m05)     <boot dev='hd'/>
	I0913 19:09:43.865324   34198 main.go:141] libmachine: (ha-617764-m05)     <bootmenu enable='no'/>
	I0913 19:09:43.865336   34198 main.go:141] libmachine: (ha-617764-m05)   </os>
	I0913 19:09:43.865344   34198 main.go:141] libmachine: (ha-617764-m05)   <devices>
	I0913 19:09:43.865356   34198 main.go:141] libmachine: (ha-617764-m05)     <disk type='file' device='cdrom'>
	I0913 19:09:43.865367   34198 main.go:141] libmachine: (ha-617764-m05)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/boot2docker.iso'/>
	I0913 19:09:43.865383   34198 main.go:141] libmachine: (ha-617764-m05)       <target dev='hdc' bus='scsi'/>
	I0913 19:09:43.865394   34198 main.go:141] libmachine: (ha-617764-m05)       <readonly/>
	I0913 19:09:43.865404   34198 main.go:141] libmachine: (ha-617764-m05)     </disk>
	I0913 19:09:43.865422   34198 main.go:141] libmachine: (ha-617764-m05)     <disk type='file' device='disk'>
	I0913 19:09:43.865434   34198 main.go:141] libmachine: (ha-617764-m05)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 19:09:43.865447   34198 main.go:141] libmachine: (ha-617764-m05)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/ha-617764-m05.rawdisk'/>
	I0913 19:09:43.865457   34198 main.go:141] libmachine: (ha-617764-m05)       <target dev='hda' bus='virtio'/>
	I0913 19:09:43.865465   34198 main.go:141] libmachine: (ha-617764-m05)     </disk>
	I0913 19:09:43.865475   34198 main.go:141] libmachine: (ha-617764-m05)     <interface type='network'>
	I0913 19:09:43.865488   34198 main.go:141] libmachine: (ha-617764-m05)       <source network='mk-ha-617764'/>
	I0913 19:09:43.865498   34198 main.go:141] libmachine: (ha-617764-m05)       <model type='virtio'/>
	I0913 19:09:43.865509   34198 main.go:141] libmachine: (ha-617764-m05)     </interface>
	I0913 19:09:43.865519   34198 main.go:141] libmachine: (ha-617764-m05)     <interface type='network'>
	I0913 19:09:43.865530   34198 main.go:141] libmachine: (ha-617764-m05)       <source network='default'/>
	I0913 19:09:43.865541   34198 main.go:141] libmachine: (ha-617764-m05)       <model type='virtio'/>
	I0913 19:09:43.865550   34198 main.go:141] libmachine: (ha-617764-m05)     </interface>
	I0913 19:09:43.865560   34198 main.go:141] libmachine: (ha-617764-m05)     <serial type='pty'>
	I0913 19:09:43.865569   34198 main.go:141] libmachine: (ha-617764-m05)       <target port='0'/>
	I0913 19:09:43.865578   34198 main.go:141] libmachine: (ha-617764-m05)     </serial>
	I0913 19:09:43.865587   34198 main.go:141] libmachine: (ha-617764-m05)     <console type='pty'>
	I0913 19:09:43.865598   34198 main.go:141] libmachine: (ha-617764-m05)       <target type='serial' port='0'/>
	I0913 19:09:43.865609   34198 main.go:141] libmachine: (ha-617764-m05)     </console>
	I0913 19:09:43.865618   34198 main.go:141] libmachine: (ha-617764-m05)     <rng model='virtio'>
	I0913 19:09:43.865629   34198 main.go:141] libmachine: (ha-617764-m05)       <backend model='random'>/dev/random</backend>
	I0913 19:09:43.865637   34198 main.go:141] libmachine: (ha-617764-m05)     </rng>
	I0913 19:09:43.865646   34198 main.go:141] libmachine: (ha-617764-m05)     
	I0913 19:09:43.865654   34198 main.go:141] libmachine: (ha-617764-m05)     
	I0913 19:09:43.865662   34198 main.go:141] libmachine: (ha-617764-m05)   </devices>
	I0913 19:09:43.865671   34198 main.go:141] libmachine: (ha-617764-m05) </domain>
	I0913 19:09:43.865681   34198 main.go:141] libmachine: (ha-617764-m05) 
	I0913 19:09:43.872580   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:b0:19:0f in network default
	I0913 19:09:43.873126   34198 main.go:141] libmachine: (ha-617764-m05) Ensuring networks are active...
	I0913 19:09:43.873145   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:43.873863   34198 main.go:141] libmachine: (ha-617764-m05) Ensuring network default is active
	I0913 19:09:43.874207   34198 main.go:141] libmachine: (ha-617764-m05) Ensuring network mk-ha-617764 is active
	I0913 19:09:43.874580   34198 main.go:141] libmachine: (ha-617764-m05) Getting domain xml...
	I0913 19:09:43.875302   34198 main.go:141] libmachine: (ha-617764-m05) Creating domain...
	I0913 19:09:45.083938   34198 main.go:141] libmachine: (ha-617764-m05) Waiting to get IP...
	I0913 19:09:45.084629   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:45.085038   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:45.085108   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:45.085011   34234 retry.go:31] will retry after 221.552841ms: waiting for machine to come up
	I0913 19:09:45.308421   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:45.308879   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:45.308904   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:45.308823   34234 retry.go:31] will retry after 262.452397ms: waiting for machine to come up
	I0913 19:09:45.573184   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:45.573623   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:45.573672   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:45.573604   34234 retry.go:31] will retry after 419.653662ms: waiting for machine to come up
	I0913 19:09:45.995087   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:45.995480   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:45.995498   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:45.995454   34234 retry.go:31] will retry after 487.977131ms: waiting for machine to come up
	I0913 19:09:46.485315   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:46.485772   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:46.485825   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:46.485745   34234 retry.go:31] will retry after 479.609864ms: waiting for machine to come up
	I0913 19:09:46.967446   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:46.967884   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:46.967906   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:46.967838   34234 retry.go:31] will retry after 602.31982ms: waiting for machine to come up
	I0913 19:09:47.572367   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:47.572761   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:47.572787   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:47.572718   34234 retry.go:31] will retry after 833.614342ms: waiting for machine to come up
	I0913 19:09:48.407995   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:48.408375   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:48.408396   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:48.408332   34234 retry.go:31] will retry after 1.004265405s: waiting for machine to come up
	I0913 19:09:49.414227   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:49.414705   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:49.414735   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:49.414649   34234 retry.go:31] will retry after 1.452231946s: waiting for machine to come up
	I0913 19:09:50.869115   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:50.869536   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:50.869557   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:50.869500   34234 retry.go:31] will retry after 2.121050525s: waiting for machine to come up
	I0913 19:09:52.993820   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:52.994209   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:52.994234   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:52.994157   34234 retry.go:31] will retry after 1.753523252s: waiting for machine to come up
	I0913 19:09:54.749868   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:54.750292   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:54.750318   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:54.750249   34234 retry.go:31] will retry after 3.626807881s: waiting for machine to come up
	I0913 19:09:58.379259   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:09:58.379721   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:09:58.379750   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:09:58.379678   34234 retry.go:31] will retry after 4.444260251s: waiting for machine to come up
	I0913 19:10:02.828998   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:02.829496   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find current IP address of domain ha-617764-m05 in network mk-ha-617764
	I0913 19:10:02.829518   34198 main.go:141] libmachine: (ha-617764-m05) DBG | I0913 19:10:02.829431   34234 retry.go:31] will retry after 3.572857993s: waiting for machine to come up
	I0913 19:10:06.403452   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.403922   34198 main.go:141] libmachine: (ha-617764-m05) Found IP for machine: 192.168.39.164
	I0913 19:10:06.403946   34198 main.go:141] libmachine: (ha-617764-m05) Reserving static IP address...
	I0913 19:10:06.403973   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has current primary IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.404443   34198 main.go:141] libmachine: (ha-617764-m05) DBG | unable to find host DHCP lease matching {name: "ha-617764-m05", mac: "52:54:00:58:27:df", ip: "192.168.39.164"} in network mk-ha-617764
	I0913 19:10:06.476800   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Getting to WaitForSSH function...
	I0913 19:10:06.476828   34198 main.go:141] libmachine: (ha-617764-m05) Reserved static IP address: 192.168.39.164
	I0913 19:10:06.476839   34198 main.go:141] libmachine: (ha-617764-m05) Waiting for SSH to be available...
	I0913 19:10:06.479378   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.479707   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:27:df}
	I0913 19:10:06.479738   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.479881   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Using SSH client type: external
	I0913 19:10:06.479905   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa (-rw-------)
	I0913 19:10:06.479938   34198 main.go:141] libmachine: (ha-617764-m05) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:10:06.479952   34198 main.go:141] libmachine: (ha-617764-m05) DBG | About to run SSH command:
	I0913 19:10:06.479977   34198 main.go:141] libmachine: (ha-617764-m05) DBG | exit 0
	I0913 19:10:06.606155   34198 main.go:141] libmachine: (ha-617764-m05) DBG | SSH cmd err, output: <nil>: 
	I0913 19:10:06.606453   34198 main.go:141] libmachine: (ha-617764-m05) KVM machine creation complete!
	I0913 19:10:06.606728   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetConfigRaw
	I0913 19:10:06.607411   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:06.607606   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:06.607736   34198 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 19:10:06.607830   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetState
	I0913 19:10:06.609170   34198 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 19:10:06.609186   34198 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 19:10:06.609193   34198 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 19:10:06.609201   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:06.611665   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.611965   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:06.611993   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.612106   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:06.612262   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.612415   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.612590   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:06.612727   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:06.612957   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:06.612970   34198 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 19:10:06.713678   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:10:06.713702   34198 main.go:141] libmachine: Detecting the provisioner...
	I0913 19:10:06.713712   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:06.716315   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.716792   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:06.716817   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.716997   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:06.717176   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.717352   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.717525   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:06.717705   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:06.717878   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:06.717891   34198 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 19:10:06.823018   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 19:10:06.823092   34198 main.go:141] libmachine: found compatible host: buildroot
	I0913 19:10:06.823102   34198 main.go:141] libmachine: Provisioning with buildroot...
	I0913 19:10:06.823110   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetMachineName
	I0913 19:10:06.823349   34198 buildroot.go:166] provisioning hostname "ha-617764-m05"
	I0913 19:10:06.823373   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetMachineName
	I0913 19:10:06.823524   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:06.826086   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.826582   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:06.826611   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.826697   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:06.826846   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.826991   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.827081   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:06.827222   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:06.827408   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:06.827420   34198 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764-m05 && echo "ha-617764-m05" | sudo tee /etc/hostname
	I0913 19:10:06.944935   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764-m05
	
	I0913 19:10:06.944967   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:06.947702   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.948152   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:06.948175   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:06.948346   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:06.948533   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.948696   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:06.948833   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:06.948977   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:06.949135   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:06.949149   34198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:10:07.059495   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:10:07.059519   34198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:10:07.059538   34198 buildroot.go:174] setting up certificates
	I0913 19:10:07.059546   34198 provision.go:84] configureAuth start
	I0913 19:10:07.059553   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetMachineName
	I0913 19:10:07.059815   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetIP
	I0913 19:10:07.062561   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.063095   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.063146   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.063252   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.066900   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.067299   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.067319   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.067467   34198 provision.go:143] copyHostCerts
	I0913 19:10:07.067504   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:10:07.067537   34198 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:10:07.067546   34198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:10:07.067608   34198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:10:07.067719   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:10:07.067738   34198 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:10:07.067745   34198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:10:07.067772   34198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:10:07.067848   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:10:07.067871   34198 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:10:07.067885   34198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:10:07.067918   34198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:10:07.067984   34198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764-m05 san=[127.0.0.1 192.168.39.164 ha-617764-m05 localhost minikube]
	I0913 19:10:07.192976   34198 provision.go:177] copyRemoteCerts
	I0913 19:10:07.193046   34198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:10:07.193079   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.195965   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.196395   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.196416   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.196649   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.196828   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.196951   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.197084   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa Username:docker}
	I0913 19:10:07.280805   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 19:10:07.280872   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:10:07.305824   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 19:10:07.305896   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 19:10:07.333277   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 19:10:07.333374   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:10:07.357656   34198 provision.go:87] duration metric: took 298.100193ms to configureAuth
	I0913 19:10:07.357682   34198 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:10:07.357925   34198 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:10:07.358029   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.360678   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.361071   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.361107   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.361259   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.361414   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.361628   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.361740   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.361913   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:07.362078   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:07.362090   34198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:10:07.590852   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:10:07.590880   34198 main.go:141] libmachine: Checking connection to Docker...
	I0913 19:10:07.590890   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetURL
	I0913 19:10:07.592113   34198 main.go:141] libmachine: (ha-617764-m05) DBG | Using libvirt version 6000000
	I0913 19:10:07.594794   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.595189   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.595215   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.595494   34198 main.go:141] libmachine: Docker is up and running!
	I0913 19:10:07.595535   34198 main.go:141] libmachine: Reticulating splines...
	I0913 19:10:07.595554   34198 client.go:171] duration metric: took 24.034715822s to LocalClient.Create
	I0913 19:10:07.595580   34198 start.go:167] duration metric: took 24.034797466s to libmachine.API.Create "ha-617764"
	I0913 19:10:07.595592   34198 start.go:293] postStartSetup for "ha-617764-m05" (driver="kvm2")
	I0913 19:10:07.595603   34198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:10:07.595624   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:07.595868   34198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:10:07.595897   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.598292   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.598644   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.598678   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.598867   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.599038   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.599179   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.599345   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa Username:docker}
	I0913 19:10:07.685018   34198 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:10:07.689627   34198 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:10:07.689651   34198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:10:07.689713   34198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:10:07.689790   34198 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:10:07.689800   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:10:07.689875   34198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:10:07.700078   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:10:07.728293   34198 start.go:296] duration metric: took 132.68985ms for postStartSetup
	I0913 19:10:07.728339   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetConfigRaw
	I0913 19:10:07.728916   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetIP
	I0913 19:10:07.731333   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.731761   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.731789   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.732068   34198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 19:10:07.732270   34198 start.go:128] duration metric: took 24.190140967s to createHost
	I0913 19:10:07.732293   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.735031   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.735366   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.735400   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.735543   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.735736   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.735877   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.736014   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.736194   34198 main.go:141] libmachine: Using SSH client type: native
	I0913 19:10:07.736355   34198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0913 19:10:07.736364   34198 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:10:07.838883   34198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254607.814715627
	
	I0913 19:10:07.838908   34198 fix.go:216] guest clock: 1726254607.814715627
	I0913 19:10:07.838917   34198 fix.go:229] Guest: 2024-09-13 19:10:07.814715627 +0000 UTC Remote: 2024-09-13 19:10:07.732282708 +0000 UTC m=+24.416013652 (delta=82.432919ms)
	I0913 19:10:07.838963   34198 fix.go:200] guest clock delta is within tolerance: 82.432919ms
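	For context, the guest-clock check logged above amounts to taking the absolute difference between the host and guest timestamps and comparing it against a fixed tolerance. A minimal Go sketch of that comparison (the function name and the 2s threshold are assumptions for illustration, not minikube's actual fix.go code):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether the guest clock is close enough to
	// the host clock, mirroring the "guest clock delta is within tolerance"
	// line above. The tolerance value passed in main is an assumed example.
	func clockWithinTolerance(host, guest time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(82 * time.Millisecond) // roughly the delta seen in the log
		fmt.Println(clockWithinTolerance(host, guest, 2*time.Second)) // true
	}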
	I0913 19:10:07.838970   34198 start.go:83] releasing machines lock for "ha-617764-m05", held for 24.297059488s
	I0913 19:10:07.838996   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:07.839293   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetIP
	I0913 19:10:07.842338   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.842739   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.842764   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.842920   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:07.843404   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:07.843578   34198 main.go:141] libmachine: (ha-617764-m05) Calling .DriverName
	I0913 19:10:07.843694   34198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:10:07.843735   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.843811   34198 ssh_runner.go:195] Run: systemctl --version
	I0913 19:10:07.843837   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHHostname
	I0913 19:10:07.846276   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.846430   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.846726   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.846747   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.846817   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:07.846836   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:07.846845   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.847003   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHPort
	I0913 19:10:07.847014   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.847162   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.847172   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHKeyPath
	I0913 19:10:07.847267   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa Username:docker}
	I0913 19:10:07.847437   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetSSHUsername
	I0913 19:10:07.847552   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764-m05/id_rsa Username:docker}
	I0913 19:10:07.958592   34198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:10:08.123258   34198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:10:08.129604   34198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:10:08.129678   34198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:10:08.146581   34198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:10:08.146606   34198 start.go:495] detecting cgroup driver to use...
	I0913 19:10:08.146675   34198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:10:08.163565   34198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:10:08.177286   34198 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:10:08.177364   34198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:10:08.190960   34198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:10:08.205060   34198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:10:08.321438   34198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:10:08.454470   34198 docker.go:233] disabling docker service ...
	I0913 19:10:08.454530   34198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:10:08.469813   34198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:10:08.484072   34198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:10:08.638258   34198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:10:08.760698   34198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:10:08.776801   34198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:10:08.797451   34198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:10:08.797504   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.809767   34198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:10:08.809828   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.822024   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.834503   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.847171   34198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:10:08.858384   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.869694   34198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:10:08.887352   34198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
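	The run of sed commands above rewrites cri-o's drop-in at /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, and adjusts conmon_cgroup and default_sysctls. A rough Go equivalent of the two key substitutions, shown only to illustrate the whole-line replacement pattern (the sample input below is invented, not the real file contents):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Invented sample of a cri-o drop-in before the rewrite.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}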
	I0913 19:10:08.898583   34198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:10:08.908698   34198 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:10:08.908760   34198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:10:08.922287   34198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:10:08.932022   34198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:10:09.047646   34198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:10:09.146380   34198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:10:09.146458   34198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:10:09.151324   34198 start.go:563] Will wait 60s for crictl version
	I0913 19:10:09.151375   34198 ssh_runner.go:195] Run: which crictl
	I0913 19:10:09.155234   34198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:10:09.201616   34198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:10:09.201712   34198 ssh_runner.go:195] Run: crio --version
	I0913 19:10:09.231167   34198 ssh_runner.go:195] Run: crio --version
	I0913 19:10:09.261768   34198 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:10:09.263026   34198 main.go:141] libmachine: (ha-617764-m05) Calling .GetIP
	I0913 19:10:09.265988   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:09.266542   34198 main.go:141] libmachine: (ha-617764-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:27:df", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 20:09:58 +0000 UTC Type:0 Mac:52:54:00:58:27:df Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-617764-m05 Clientid:01:52:54:00:58:27:df}
	I0913 19:10:09.266570   34198 main.go:141] libmachine: (ha-617764-m05) DBG | domain ha-617764-m05 has defined IP address 192.168.39.164 and MAC address 52:54:00:58:27:df in network mk-ha-617764
	I0913 19:10:09.266824   34198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:10:09.271118   34198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:10:09.284291   34198 mustload.go:65] Loading cluster: ha-617764
	I0913 19:10:09.284539   34198 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:10:09.284872   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:10:09.284916   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:10:09.299967   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I0913 19:10:09.300365   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:10:09.300923   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:10:09.300948   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:10:09.301280   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:10:09.301490   34198 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 19:10:09.302993   34198 host.go:66] Checking if "ha-617764" exists ...
	I0913 19:10:09.303285   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:10:09.303317   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:10:09.318906   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0913 19:10:09.319365   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:10:09.319962   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:10:09.319986   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:10:09.320268   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:10:09.320430   34198 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:10:09.320555   34198 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.164
	I0913 19:10:09.320564   34198 certs.go:194] generating shared ca certs ...
	I0913 19:10:09.320577   34198 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:10:09.320706   34198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:10:09.320742   34198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:10:09.320753   34198 certs.go:256] generating profile certs ...
	I0913 19:10:09.320823   34198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 19:10:09.320848   34198 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.2466bcf8
	I0913 19:10:09.320866   34198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.2466bcf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.164 192.168.39.254]
	I0913 19:10:09.537828   34198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.2466bcf8 ...
	I0913 19:10:09.537857   34198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.2466bcf8: {Name:mka08f4279c39a5e3cfa1ff0129160634a7d7870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:10:09.538018   34198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.2466bcf8 ...
	I0913 19:10:09.538035   34198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.2466bcf8: {Name:mke02a6a2c5a8832b88c258cf8a4ad0e29877db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:10:09.538134   34198 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.2466bcf8 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 19:10:09.538334   34198 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.2466bcf8 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
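	The apiserver cert generated above carries the service, loopback, node, and VIP addresses as IP SANs. A hedged Go sketch of issuing a serving certificate with that SAN set using crypto/x509; it self-signs for brevity, whereas the real flow signs with the cluster CA, and none of this is minikube's certs.go code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs taken from the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.145"), net.ParseIP("192.168.39.203"),
				net.ParseIP("192.168.39.164"), net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed here for brevity; errors are ignored only to keep the sketch short.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}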
	I0913 19:10:09.538472   34198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 19:10:09.538486   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:10:09.538500   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:10:09.538513   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:10:09.538525   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:10:09.538537   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:10:09.538550   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:10:09.538561   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:10:09.538572   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:10:09.538614   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:10:09.538639   34198 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:10:09.538648   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:10:09.538671   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:10:09.538722   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:10:09.538755   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:10:09.538797   34198 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:10:09.538827   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:10:09.538842   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:10:09.538854   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:10:09.538884   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:10:09.541846   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:10:09.542276   34198 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:10:09.542310   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:10:09.542542   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:10:09.542718   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:10:09.542843   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:10:09.542968   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:10:09.618470   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 19:10:09.623511   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 19:10:09.636520   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 19:10:09.643136   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 19:10:09.654681   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 19:10:09.658593   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 19:10:09.671767   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 19:10:09.676589   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 19:10:09.690019   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 19:10:09.695722   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 19:10:09.709665   34198 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 19:10:09.714536   34198 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 19:10:09.727688   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:10:09.753484   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:10:09.779769   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:10:09.804151   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:10:09.830006   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0913 19:10:09.854281   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:10:09.878538   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:10:09.903534   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:10:09.929569   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:10:09.954525   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:10:09.981120   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:10:10.006710   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 19:10:10.023988   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 19:10:10.040487   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 19:10:10.059600   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 19:10:10.078737   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 19:10:10.096098   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 19:10:10.113388   34198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 19:10:10.130460   34198 ssh_runner.go:195] Run: openssl version
	I0913 19:10:10.136472   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:10:10.148278   34198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:10:10.153159   34198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:10:10.153208   34198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:10:10.158894   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:10:10.169885   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:10:10.181621   34198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:10:10.186457   34198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:10:10.186516   34198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:10:10.192171   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:10:10.203170   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:10:10.214482   34198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:10:10.219530   34198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:10:10.219581   34198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:10:10.225537   34198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:10:10.236680   34198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:10:10.240908   34198 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 19:10:10.240964   34198 kubeadm.go:934] updating node {m05 192.168.39.164 8443 v1.31.1  true true} ...
	I0913 19:10:10.241053   34198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:10:10.241089   34198 kube-vip.go:115] generating kube-vip config ...
	I0913 19:10:10.241122   34198 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 19:10:10.256573   34198 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 19:10:10.256674   34198 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
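	The manifest printed above is later written to the node as a static pod (see the kube-vip.yaml scp at 19:10:11). A short Go sketch of rendering such a manifest from a VIP address and port with text/template; the trimmed template and field names below are illustrative only, not minikube's actual kube-vip config generator:

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed excerpt modeled on the manifest in the log above.
	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .VIP }}
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		// Values taken from the log: VIP 192.168.39.254 on port 8443.
		_ = t.Execute(os.Stdout, struct {
			VIP  string
			Port int
		}{VIP: "192.168.39.254", Port: 8443})
	}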
	I0913 19:10:10.256735   34198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:10:10.267117   34198 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 19:10:10.267199   34198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 19:10:10.276668   34198 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 19:10:10.276690   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 19:10:10.276721   34198 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 19:10:10.276740   34198 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 19:10:10.276756   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 19:10:10.276776   34198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:10:10.276793   34198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 19:10:10.276732   34198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 19:10:10.294350   34198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 19:10:10.294400   34198 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 19:10:10.294430   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 19:10:10.294434   34198 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 19:10:10.294445   34198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 19:10:10.294450   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 19:10:10.309400   34198 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 19:10:10.309434   34198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0913 19:10:11.170759   34198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 19:10:11.181234   34198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 19:10:11.199047   34198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:10:11.216529   34198 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 19:10:11.240650   34198 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 19:10:11.244913   34198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:10:11.257988   34198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:10:11.382857   34198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:10:11.401041   34198 host.go:66] Checking if "ha-617764" exists ...
	I0913 19:10:11.401480   34198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:10:11.401520   34198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:10:11.418401   34198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42033
	I0913 19:10:11.418896   34198 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:10:11.419429   34198 main.go:141] libmachine: Using API Version  1
	I0913 19:10:11.419451   34198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:10:11.419761   34198 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:10:11.419932   34198 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:10:11.420052   34198 start.go:317] joinCluster: &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP:192.168.39.164 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0913 19:10:11.420207   34198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 19:10:11.420222   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:10:11.422876   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:10:11.423410   34198 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:10:11.423434   34198 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:10:11.423577   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:10:11.423720   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:10:11.423861   34198 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:10:11.423978   34198 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:10:11.608008   34198 start.go:343] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.168.39.164 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0913 19:10:11.608056   34198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rut3pa.bs0u0y8ah2w2kue1 --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m05 --control-plane --apiserver-advertise-address=192.168.39.164 --apiserver-bind-port=8443"
	I0913 19:10:35.205398   34198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rut3pa.bs0u0y8ah2w2kue1 --discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-617764-m05 --control-plane --apiserver-advertise-address=192.168.39.164 --apiserver-bind-port=8443": (23.597316607s)
	I0913 19:10:35.205443   34198 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 19:10:35.758589   34198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-617764-m05 minikube.k8s.io/updated_at=2024_09_13T19_10_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=ha-617764 minikube.k8s.io/primary=false
	I0913 19:10:35.904107   34198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-617764-m05 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 19:10:36.072751   34198 start.go:319] duration metric: took 24.652693287s to joinCluster
	I0913 19:10:36.072820   34198 start.go:235] Will wait 6m0s for node &{Name:m05 IP:192.168.39.164 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0913 19:10:36.073214   34198 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:10:36.074215   34198 out.go:177] * Verifying Kubernetes components...
	I0913 19:10:36.075453   34198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:10:36.263488   34198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:10:36.280336   34198 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:10:36.280619   34198 kapi.go:59] client config for ha-617764: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 19:10:36.280706   34198 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.145:8443
	I0913 19:10:36.281112   34198 cert_rotation.go:140] Starting client certificate rotation controller
	I0913 19:10:36.281386   34198 node_ready.go:35] waiting up to 6m0s for node "ha-617764-m05" to be "Ready" ...
	I0913 19:10:36.281473   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:36.281484   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:36.281495   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:36.281502   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:36.291060   34198 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
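	The repeated GETs that follow are minikube polling the node object until its Ready condition turns True (within the 6m0s window noted above). A hedged client-go sketch of the same wait loop; the poll interval and helper name are assumptions, not minikube's node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports the
	// Ready condition as True, or the timeout expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Kubeconfig path taken from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-617764-m05", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}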
	I0913 19:10:36.782355   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:36.782381   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:36.782397   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:36.782403   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:36.785847   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:37.281811   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:37.281832   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:37.281840   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:37.281845   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:37.285097   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:37.781966   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:37.781988   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:37.781996   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:37.782000   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:37.785219   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:38.282197   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:38.282229   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:38.282239   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:38.282245   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:38.285728   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:38.286419   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:38.782056   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:38.782086   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:38.782112   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:38.782117   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:38.786020   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:39.281975   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:39.281996   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:39.282003   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:39.282008   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:39.285402   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:39.782504   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:39.782527   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:39.782536   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:39.782541   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:39.786708   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:40.281799   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:40.281822   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:40.281833   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:40.281838   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:40.285594   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:40.286779   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:40.782087   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:40.782134   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:40.782147   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:40.782154   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:40.813457   34198 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0913 19:10:41.282467   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:41.282494   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:41.282502   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:41.282508   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:41.286158   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:41.782517   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:41.782538   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:41.782547   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:41.782551   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:41.785706   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:42.281675   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:42.281696   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:42.281704   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:42.281708   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:42.286524   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:42.287350   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:42.782417   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:42.782438   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:42.782445   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:42.782448   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:42.786246   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:43.282214   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:43.282230   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:43.282238   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:43.282243   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:43.286224   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:43.781799   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:43.781822   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:43.781830   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:43.781835   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:43.785839   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:44.281735   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:44.281759   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:44.281783   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:44.281788   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:44.285372   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:44.781557   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:44.781580   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:44.781588   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:44.781592   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:44.785497   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:44.786012   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:45.282395   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:45.282417   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:45.282427   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:45.282433   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:45.285660   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:45.782496   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:45.782517   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:45.782528   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:45.782532   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:45.785423   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:46.282493   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:46.282529   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:46.282537   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:46.282541   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:46.285855   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:46.782376   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:46.782402   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:46.782414   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:46.782422   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:46.785692   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:46.786269   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:47.282626   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:47.282653   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:47.282664   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:47.282669   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:47.286364   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:47.782549   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:47.782571   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:47.782581   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:47.782588   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:47.786303   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:48.282572   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:48.282593   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:48.282601   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:48.282605   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:48.285941   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:48.781997   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:48.782019   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:48.782026   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:48.782031   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:48.785275   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:49.282563   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:49.282595   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:49.282606   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:49.282613   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:49.286518   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:49.287220   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:49.782241   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:49.782261   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:49.782271   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:49.782275   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:49.785463   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:50.281659   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:50.281679   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:50.281689   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:50.281696   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:50.285109   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:50.782175   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:50.782194   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:50.782201   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:50.782207   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:50.785892   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:51.281971   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:51.281991   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:51.281999   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:51.282002   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:51.285411   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:51.781920   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:51.781942   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:51.781950   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:51.781954   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:51.785661   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:51.786262   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:52.282560   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:52.282585   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:52.282595   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:52.282602   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:52.285953   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:52.781829   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:52.781850   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:52.781858   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:52.781862   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:52.785747   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:53.282234   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:53.282253   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:53.282261   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:53.282265   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:53.285986   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:53.782506   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:53.782529   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:53.782542   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:53.782546   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:53.785986   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:53.786638   34198 node_ready.go:53] node "ha-617764-m05" has status "Ready":"False"
	I0913 19:10:54.282030   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:54.282049   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.282057   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.282062   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.285789   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:54.782574   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:54.782607   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.782618   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.782622   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.786864   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:54.787575   34198 node_ready.go:49] node "ha-617764-m05" has status "Ready":"True"
	I0913 19:10:54.787604   34198 node_ready.go:38] duration metric: took 18.506184977s for node "ha-617764-m05" to be "Ready" ...
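The two lines above close out the node-readiness wait: the log shows GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05 being polled roughly every 500ms until the node's Ready condition flips to True, after which the test moves on to waiting for system-critical pods. Below is a minimal Go sketch of that polling pattern using client-go, assuming a kubeconfig at the default location; it is an illustration of the pattern visible in the log, not minikube's actual node_ready.go.

// Poll a node's Ready condition every ~500ms (the cadence seen in the log above).
// Assumptions: client-go is available and ~/.kube/config points at the cluster.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForNodeReady GETs the node on a 500ms ticker until it is Ready or ctx expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Printf("node %q is Ready\n", name)
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "ha-617764-m05"); err != nil {
		panic(err)
	}
}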
	I0913 19:10:54.787616   34198 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:10:54.787658   34198 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 19:10:54.787671   34198 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 19:10:54.787719   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I0913 19:10:54.787726   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.787733   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.787737   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.794263   34198 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 19:10:54.803117   34198 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.803191   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fdhnm
	I0913 19:10:54.803199   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.803206   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.803211   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.806875   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:54.808072   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 19:10:54.808087   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.808094   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.808097   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.810790   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:54.811342   34198 pod_ready.go:93] pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:54.811363   34198 pod_ready.go:82] duration metric: took 8.221945ms for pod "coredns-7c65d6cfc9-fdhnm" in "kube-system" namespace to be "Ready" ...
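Each per-pod check above follows the same shape: GET the pod in kube-system, GET the node it runs on, and report Ready once the pod's Ready condition is True. A minimal sketch of that check with client-go's wait helpers, assuming an already-built clientset; this is illustrative only, not minikube's pod_ready.go.

// Poll a pod's Ready condition until it is True or the timeout expires.
// Assumptions: client-go >= v0.27 (for wait.PollUntilContextTimeout) and a clientset
// constructed elsewhere.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls every 500ms, mirroring the cadence in the log above.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			return podIsReady(p), nil
		})
}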
	I0913 19:10:54.811374   34198 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.811437   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-htrbt
	I0913 19:10:54.811447   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.811458   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.811469   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.814174   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:54.815355   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 19:10:54.815369   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.815375   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.815379   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.818485   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:54.819368   34198 pod_ready.go:93] pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:54.819387   34198 pod_ready.go:82] duration metric: took 8.006313ms for pod "coredns-7c65d6cfc9-htrbt" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.819396   34198 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.819446   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764
	I0913 19:10:54.819454   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.819461   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.819466   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.822956   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:54.823531   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 19:10:54.823551   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.823561   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.823566   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.826699   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:54.827163   34198 pod_ready.go:93] pod "etcd-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:54.827182   34198 pod_ready.go:82] duration metric: took 7.780123ms for pod "etcd-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.827194   34198 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.827263   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m02
	I0913 19:10:54.827273   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.827283   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.827291   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.830069   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:54.830678   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:54.830690   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.830697   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.830700   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.833254   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:54.833717   34198 pod_ready.go:93] pod "etcd-ha-617764-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:54.833735   34198 pod_ready.go:82] duration metric: took 6.534385ms for pod "etcd-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.833743   34198 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-617764-m05" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:54.983084   34198 request.go:632] Waited for 149.293914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m05
	I0913 19:10:54.983155   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/etcd-ha-617764-m05
	I0913 19:10:54.983161   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:54.983168   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:54.983175   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:54.986289   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:55.183275   34198 request.go:632] Waited for 196.265876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:55.183358   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m05
	I0913 19:10:55.183366   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:55.183376   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:55.183384   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:55.186806   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:55.187522   34198 pod_ready.go:93] pod "etcd-ha-617764-m05" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:55.187544   34198 pod_ready.go:82] duration metric: took 353.795259ms for pod "etcd-ha-617764-m05" in "kube-system" namespace to be "Ready" ...
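The "Waited for ... due to client-side throttling, not priority and fairness" lines just above are emitted by client-go's client-side rate limiter (default QPS=5, Burst=10), not by the API server, so bursts of back-to-back GETs get queued locally for a few hundred milliseconds. A minimal sketch of how a caller could raise those limits on the rest.Config before building a clientset; the chosen values are illustrative assumptions, not what minikube configures.

// Build a clientset with a larger client-side rate-limit budget so tight polling
// loops are not delayed by local throttling. Assumption: kubeconfig path supplied
// by the caller.
package fastclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go defaults to QPS=5 and Burst=10; requests beyond that burst are
	// queued client-side, producing the "Waited for ..." log lines above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}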
	I0913 19:10:55.187563   34198 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:55.383548   34198 request.go:632] Waited for 195.896181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 19:10:55.383609   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764
	I0913 19:10:55.383614   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:55.383621   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:55.383624   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:55.387715   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:55.582638   34198 request.go:632] Waited for 194.289409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 19:10:55.582717   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764
	I0913 19:10:55.582727   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:55.582738   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:55.582746   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:55.587664   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:55.588152   34198 pod_ready.go:93] pod "kube-apiserver-ha-617764" in "kube-system" namespace has status "Ready":"True"
	I0913 19:10:55.588169   34198 pod_ready.go:82] duration metric: took 400.599768ms for pod "kube-apiserver-ha-617764" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:55.588179   34198 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace to be "Ready" ...
	I0913 19:10:55.783256   34198 request.go:632] Waited for 195.022225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:55.783314   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:55.783332   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:55.783339   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:55.783343   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:55.786618   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:55.982580   34198 request.go:632] Waited for 195.272756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:55.982665   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:55.982676   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:55.982687   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:55.982697   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:55.986064   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:56.183058   34198 request.go:632] Waited for 94.268482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:56.183113   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:56.183119   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:56.183126   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:56.183131   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:56.186983   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:56.382694   34198 request.go:632] Waited for 194.882205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:56.382765   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:56.382770   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:56.382779   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:56.382784   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:56.386267   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:56.588734   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:56.588759   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:56.588769   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:56.588776   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:56.592042   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:56.783074   34198 request.go:632] Waited for 190.35473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:56.783140   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:56.783147   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:56.783163   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:56.783174   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:56.786868   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:57.088578   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:57.088597   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:57.088607   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:57.088612   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:57.092003   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:57.183022   34198 request.go:632] Waited for 90.238153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:57.183093   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:57.183099   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:57.183106   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:57.183111   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:57.186623   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:57.588406   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:57.588424   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:57.588432   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:57.588437   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:57.591870   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:57.592675   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:57.592691   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:57.592701   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:57.592705   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:57.595807   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:57.596314   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:10:58.088700   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:58.088724   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:58.088734   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:58.088741   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:58.092574   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:58.093213   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:58.093226   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:58.093237   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:58.093243   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:58.095886   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:58.589066   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:58.589090   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:58.589100   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:58.589105   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:58.593290   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:10:58.594181   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:58.594197   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:58.594208   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:58.594212   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:58.597493   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:59.088345   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:59.088364   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:59.088372   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:59.088376   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:59.091235   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:59.092121   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:59.092139   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:59.092149   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:59.092155   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:59.094627   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:59.588401   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:10:59.588423   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:59.588431   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:59.588436   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:59.592340   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:10:59.593308   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:10:59.593324   34198 round_trippers.go:469] Request Headers:
	I0913 19:10:59.593332   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:10:59.593335   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:10:59.596249   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:10:59.596956   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:00.088338   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:00.088361   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:00.088387   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:00.088397   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:00.092242   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:00.093235   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:00.093253   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:00.093260   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:00.093275   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:00.096071   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:00.589092   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:00.589115   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:00.589126   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:00.589129   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:00.592790   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:00.593518   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:00.593534   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:00.593541   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:00.593545   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:00.596366   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:01.089090   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:01.089118   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:01.089128   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:01.089133   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:01.096810   34198 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 19:11:01.097661   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:01.097680   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:01.097689   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:01.097694   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:01.102685   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:01.588983   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:01.589013   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:01.589040   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:01.589046   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:01.592867   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:01.593785   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:01.593799   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:01.593812   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:01.593815   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:01.596992   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:01.597548   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:02.088756   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:02.088794   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:02.088802   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:02.088806   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:02.092049   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:02.093000   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:02.093021   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:02.093032   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:02.093039   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:02.095732   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:02.588535   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:02.588559   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:02.588570   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:02.588575   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:02.592303   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:02.592956   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:02.592971   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:02.592978   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:02.592982   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:02.595744   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:03.088458   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:03.088482   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:03.088489   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:03.088493   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:03.091949   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:03.093073   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:03.093086   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:03.093093   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:03.093096   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:03.095874   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:03.588313   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:03.588341   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:03.588349   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:03.588352   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:03.592491   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:03.593209   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:03.593226   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:03.593235   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:03.593240   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:03.595866   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:04.088524   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:04.088545   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:04.088553   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:04.088557   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:04.092210   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:04.093120   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:04.093134   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:04.093141   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:04.093146   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:04.095808   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:04.096400   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:04.588431   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:04.588453   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:04.588461   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:04.588465   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:04.592003   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:04.592879   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:04.592895   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:04.592902   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:04.592906   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:04.595520   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:05.088500   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:05.088522   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:05.088545   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:05.088550   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:05.092009   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:05.093038   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:05.093052   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:05.093059   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:05.093062   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:05.095878   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:05.588947   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:05.588968   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:05.588975   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:05.588979   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:05.592250   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:05.592895   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:05.592908   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:05.592915   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:05.592918   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:05.595464   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:06.089236   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:06.089257   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:06.089265   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:06.089270   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:06.096352   34198 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 19:11:06.097318   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:06.097337   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:06.097347   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:06.097355   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:06.100216   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:06.101056   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:06.589053   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:06.589077   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:06.589084   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:06.589088   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:06.592715   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:06.593692   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:06.593708   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:06.593715   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:06.593719   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:06.597173   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:07.089351   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:07.089379   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:07.089387   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:07.089392   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:07.093025   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:07.093694   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:07.093710   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:07.093717   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:07.093722   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:07.100147   34198 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 19:11:07.588982   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:07.589002   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:07.589010   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:07.589015   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:07.593388   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:07.594219   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:07.594237   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:07.594247   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:07.594259   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:07.597581   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:08.088615   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:08.088637   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:08.088645   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:08.088650   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:08.092810   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:08.093826   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:08.093841   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:08.093851   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:08.093856   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:08.097353   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:08.588485   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:08.588505   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:08.588513   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:08.588518   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:08.592168   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:08.592825   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:08.592841   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:08.592848   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:08.592853   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:08.595621   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:08.596717   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:09.088459   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:09.088480   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:09.088488   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:09.088491   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:09.094531   34198 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 19:11:09.095297   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:09.095312   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:09.095319   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:09.095323   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:09.098929   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:09.589210   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:09.589232   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:09.589240   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:09.589244   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:09.592875   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:09.593755   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:09.593774   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:09.593781   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:09.593784   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:09.596598   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:10.088535   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:10.088556   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:10.088567   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:10.088573   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:10.092215   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:10.093042   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:10.093057   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:10.093064   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:10.093068   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:10.096213   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:10.589275   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:10.589299   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:10.589309   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:10.589314   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:10.593186   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:10.594176   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:10.594193   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:10.594201   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:10.594206   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:10.597095   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:10.597544   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:11.088994   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:11.089015   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:11.089025   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:11.089031   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:11.092289   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:11.093149   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:11.093163   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:11.093170   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:11.093174   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:11.095901   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:11.589372   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:11.589397   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:11.589416   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:11.589424   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:11.593640   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:11.594329   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:11.594344   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:11.594354   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:11.594361   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:11.596916   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:12.088767   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:12.088792   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:12.088803   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:12.088814   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:12.093376   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:12.094235   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:12.094253   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:12.094264   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:12.094270   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:12.097581   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:12.589362   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:12.589383   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:12.589394   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:12.589399   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:12.594320   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:12.595064   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:12.595078   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:12.595086   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:12.595091   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:12.597972   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:12.598550   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:13.088748   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:13.088773   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:13.088781   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:13.088784   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:13.092551   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:13.093211   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:13.093229   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:13.093237   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:13.093240   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:13.096497   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:13.589375   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:13.589394   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:13.589402   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:13.589406   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:13.592986   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:13.593655   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:13.593670   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:13.593677   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:13.593682   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:13.596333   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:14.089206   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:14.089230   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:14.089237   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:14.089243   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:14.093677   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:14.094440   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:14.094454   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:14.094461   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:14.094465   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:14.098381   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:14.589160   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:14.589180   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:14.589188   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:14.589193   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:14.593053   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:14.593676   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:14.593695   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:14.593705   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:14.593712   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:14.596655   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:15.089374   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:15.089396   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:15.089406   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:15.089411   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:15.093320   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:15.094036   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:15.094052   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:15.094059   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:15.094065   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:15.096833   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:15.097547   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:15.588403   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:15.588426   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:15.588437   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:15.588444   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:15.592167   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:15.593309   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:15.593335   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:15.593346   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:15.593351   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:15.596280   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:16.088876   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:16.088903   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:16.088913   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:16.088918   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:16.092678   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:16.093885   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:16.093898   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:16.093905   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:16.093910   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:16.096720   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:16.589380   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:16.589406   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:16.589417   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:16.589423   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:16.593280   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:16.594140   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:16.594158   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:16.594178   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:16.594189   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:16.596801   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:17.088716   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:17.088741   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:17.088758   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:17.088764   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:17.092272   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:17.092978   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:17.092993   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:17.093003   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:17.093009   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:17.095766   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:17.588432   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:17.588454   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:17.588461   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:17.588465   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:17.592054   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:17.593003   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:17.593018   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:17.593026   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:17.593030   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:17.595541   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:17.596321   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:18.088448   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:18.088470   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:18.088476   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:18.088480   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:18.091831   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:18.092549   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:18.092563   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:18.092573   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:18.092580   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:18.095208   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:18.589324   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:18.589344   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:18.589361   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:18.589366   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:18.593061   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:18.593879   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:18.593895   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:18.593902   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:18.593906   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:18.596587   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:19.088373   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:19.088401   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:19.088410   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:19.088414   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:19.092564   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:19.093365   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:19.093381   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:19.093394   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:19.093401   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:19.096428   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:19.589300   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:19.589318   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:19.589326   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:19.589330   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:19.593009   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:19.593743   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:19.593756   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:19.593764   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:19.593771   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:19.596450   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:19.597055   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:20.089024   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:20.089046   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:20.089054   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:20.089059   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:20.092369   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:20.093211   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:20.093228   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:20.093237   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:20.093241   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:20.096017   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:20.589039   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:20.589069   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:20.589077   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:20.589083   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:20.592056   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:20.592848   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:20.592862   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:20.592869   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:20.592871   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:20.595326   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:21.089263   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:21.089287   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:21.089296   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:21.089299   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:21.092876   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:21.093564   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:21.093578   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:21.093585   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:21.093589   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:21.096410   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:21.588742   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:21.588767   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:21.588777   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:21.588781   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:21.592265   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:21.592935   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:21.592957   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:21.592966   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:21.592974   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:21.595759   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:22.088453   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:22.088475   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:22.088483   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:22.088487   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:22.092668   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:22.093740   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:22.093756   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:22.093766   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:22.093771   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:22.096492   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:22.097219   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:22.588418   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:22.588441   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:22.588451   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:22.588458   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:22.596084   34198 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 19:11:22.596891   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:22.596910   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:22.596922   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:22.596927   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:22.599851   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:23.088590   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:23.088612   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:23.088619   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:23.088622   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:23.092062   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:23.093068   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:23.093088   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:23.093095   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:23.093098   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:23.095809   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:23.588322   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:23.588341   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:23.588351   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:23.588355   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:23.592532   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:23.593695   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:23.593710   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:23.593717   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:23.593725   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:23.596458   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:24.088331   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:24.088351   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:24.088359   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:24.088365   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:24.091568   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:24.092367   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:24.092392   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:24.092400   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:24.092405   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:24.095029   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:24.588967   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:24.588989   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:24.588997   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:24.589002   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:24.592815   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:24.593522   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:24.593536   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:24.593544   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:24.593551   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:24.596431   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:24.596972   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:25.089292   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:25.089314   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:25.089321   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:25.089325   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:25.093175   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:25.093779   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:25.093793   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:25.093801   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:25.093804   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:25.096634   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:25.588442   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:25.588466   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:25.588476   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:25.588481   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:25.592354   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:25.593142   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:25.593155   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:25.593162   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:25.593167   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:25.595855   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:26.088812   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:26.088832   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:26.088841   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:26.088845   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:26.092740   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:26.093400   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:26.093417   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:26.093429   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:26.093437   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:26.096112   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:26.588770   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:26.588791   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:26.588799   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:26.588804   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:26.592544   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:26.593334   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:26.593349   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:26.593359   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:26.593364   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:26.596343   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:27.089292   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:27.089314   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:27.089322   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:27.089335   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:27.092791   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:27.093681   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:27.093696   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:27.093704   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:27.093708   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:27.096618   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:27.097241   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:27.588613   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:27.588635   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:27.588645   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:27.588650   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:27.592554   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:27.593222   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:27.593236   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:27.593243   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:27.593248   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:27.596167   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:28.089249   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:28.089271   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:28.089279   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:28.089282   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:28.093001   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:28.093865   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:28.093883   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:28.093893   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:28.093897   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:28.097020   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:28.588542   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:28.588562   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:28.588570   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:28.588576   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:28.592448   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:28.593184   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:28.593196   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:28.593203   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:28.593207   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:28.596039   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:29.089077   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:29.089105   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:29.089116   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:29.089120   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:29.092812   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:29.093698   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:29.093731   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:29.093742   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:29.093749   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:29.096522   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:29.589080   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:29.589102   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:29.589109   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:29.589114   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:29.592804   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:29.593680   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:29.593695   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:29.593702   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:29.593708   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:29.596517   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:29.597126   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:30.088397   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:30.088416   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:30.088424   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:30.088429   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:30.092202   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:30.093169   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:30.093185   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:30.093192   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:30.093197   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:30.096035   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:30.588986   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:30.589010   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:30.589021   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:30.589026   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:30.592594   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:30.593773   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:30.593788   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:30.593795   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:30.593799   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:30.596381   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:31.088367   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:31.088387   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:31.088395   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:31.088399   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:31.091896   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:31.092721   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:31.092738   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:31.092749   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:31.092756   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:31.095961   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:31.588336   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:31.588360   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:31.588370   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:31.588376   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:31.592172   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:31.592963   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:31.592979   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:31.592987   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:31.592992   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:31.595724   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:32.088455   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:32.088481   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:32.088490   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:32.088496   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:32.092707   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:32.093262   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:32.093277   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:32.093286   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:32.093291   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:32.095884   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:32.096308   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:32.588695   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:32.588719   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:32.588729   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:32.588737   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:32.592570   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:32.593376   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:32.593389   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:32.593397   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:32.593400   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:32.596484   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:33.089346   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:33.089366   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:33.089374   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:33.089378   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:33.092902   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:33.093569   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:33.093584   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:33.093595   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:33.093602   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:33.096275   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:33.588518   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:33.588536   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:33.588544   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:33.588551   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:33.591807   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:33.592602   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:33.592616   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:33.592624   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:33.592628   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:33.594997   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:34.088868   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:34.088889   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:34.088897   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:34.088899   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:34.093022   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:34.093809   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:34.093825   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:34.093832   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:34.093835   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:34.096722   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:34.097166   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:34.589362   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:34.589385   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:34.589394   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:34.589398   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:34.592962   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:34.593638   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:34.593651   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:34.593659   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:34.593663   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:34.596543   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:35.088463   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:35.088484   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:35.088492   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:35.088497   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:35.092128   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:35.092843   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:35.092856   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:35.092863   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:35.092867   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:35.095686   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:35.588551   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:35.588573   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:35.588582   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:35.588587   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:35.592229   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:35.593039   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:35.593053   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:35.593064   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:35.593070   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:35.595768   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:36.088774   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:36.088796   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:36.088804   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:36.088808   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:36.093769   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:36.094542   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:36.094557   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:36.094563   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:36.094568   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:36.097830   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:36.098338   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:36.589278   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:36.589298   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:36.589305   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:36.589309   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:36.592626   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:36.593498   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:36.593512   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:36.593523   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:36.593528   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:36.596322   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:37.088360   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:37.088387   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:37.088396   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:37.088400   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:37.091689   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:37.092309   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:37.092323   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:37.092330   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:37.092333   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:37.095030   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:37.588999   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:37.589020   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:37.589028   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:37.589032   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:37.592995   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:37.593700   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:37.593713   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:37.593721   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:37.593725   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:37.596651   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:38.088450   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:38.088472   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:38.088481   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:38.088487   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:38.091829   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:38.092574   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:38.092588   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:38.092596   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:38.092604   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:38.095402   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:38.588811   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:38.588833   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:38.588841   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:38.588845   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:38.592476   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:38.593302   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:38.593318   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:38.593325   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:38.593329   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:38.596684   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:38.597319   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:39.088479   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:39.088501   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:39.088509   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:39.088513   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:39.096955   34198 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0913 19:11:39.097819   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:39.097846   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:39.097865   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:39.097874   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:39.100702   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:39.588343   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:39.588364   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:39.588384   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:39.588390   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:39.592083   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:39.592998   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:39.593012   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:39.593019   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:39.593023   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:39.596323   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:40.088560   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:40.088585   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:40.088598   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:40.088605   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:40.092066   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:40.092664   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:40.092679   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:40.092686   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:40.092690   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:40.095556   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:40.588449   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:40.588469   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:40.588480   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:40.588487   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:40.592080   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:40.592902   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:40.592917   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:40.592924   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:40.592930   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:40.595558   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:41.088428   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:41.088452   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:41.088462   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:41.088468   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:41.091511   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:41.092454   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:41.092468   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:41.092475   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:41.092478   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:41.095393   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:41.096040   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:41.588736   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:41.588762   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:41.588773   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:41.588778   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:41.592211   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:41.592871   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:41.592886   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:41.592893   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:41.592897   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:41.595387   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:42.089348   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:42.089375   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:42.089387   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:42.089392   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:42.093770   34198 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 19:11:42.094747   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:42.094762   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:42.094771   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:42.094775   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:42.097238   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:42.589148   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:42.589170   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:42.589176   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:42.589180   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:42.593015   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:42.593798   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:42.593812   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:42.593819   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:42.593823   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:42.597474   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:43.088387   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:43.088413   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:43.088424   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:43.088428   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:43.092171   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:43.092881   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:43.092893   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:43.092900   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:43.092905   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:43.095600   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:43.096134   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:43.588812   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:43.588836   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:43.588847   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:43.588854   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:43.592041   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:43.592958   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:43.592975   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:43.592981   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:43.592988   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:43.598961   34198 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 19:11:44.088750   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:44.088771   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:44.088779   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:44.088786   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:44.092629   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:44.093340   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:44.093353   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:44.093360   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:44.093364   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:44.097245   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:44.589108   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:44.589131   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:44.589140   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:44.589144   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:44.593086   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:44.593866   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:44.593879   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:44.593887   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:44.593891   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:44.596679   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:45.088564   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:45.088585   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:45.088593   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:45.088596   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:45.092054   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:45.092808   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:45.092828   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:45.092837   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:45.092850   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:45.095605   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:45.096300   34198 pod_ready.go:103] pod "kube-apiserver-ha-617764-m02" in "kube-system" namespace has status "Ready":"False"
	I0913 19:11:45.588556   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:45.588579   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:45.588589   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:45.588596   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:45.591934   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:45.592838   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:45.592853   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:45.592860   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:45.592864   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:45.595375   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 19:11:46.089282   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-617764-m02
	I0913 19:11:46.089303   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:46.089310   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:46.089314   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:46.092676   34198 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 19:11:46.093426   34198 round_trippers.go:463] GET https://192.168.39.145:8443/api/v1/nodes/ha-617764-m02
	I0913 19:11:46.093440   34198 round_trippers.go:469] Request Headers:
	I0913 19:11:46.093447   34198 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 19:11:46.093451   34198 round_trippers.go:473]     Accept: application/json, */*
	I0913 19:11:46.096330   34198 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-617764 --control-plane -v=7 --alsologtostderr" : signal: killed
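The long run of round_trippers lines above is a readiness poll: roughly every 500ms the test re-fetches the kube-apiserver-ha-617764-m02 pod and its node, and pod_ready keeps logging Ready:"False" until the harness kills the `node add` child process, which is why the error ends in "signal: killed". A minimal client-go sketch of a poll with that shape is shown below; it is not minikube's own helper, and the kubeconfig path, timeout, and function names are assumptions for the example.

// Minimal sketch only, not minikube's helper: poll a pod until its Ready
// condition is True or the context deadline expires, mirroring the 500ms
// GET loop in the log above. Kubeconfig path and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "kube-apiserver-ha-617764-m02"); err != nil {
		fmt.Println(err)
	}
}

Against a cluster in the state captured above, such a loop keeps cycling until the deadline, matching the repeated Ready:"False" entries in the log.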
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-617764 -n ha-617764
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 logs -n 25: (1.785375319s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m04 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp testdata/cp-test.txt                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt                     |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764 sudo cat                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764.txt                               |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m02 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | ha-617764-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-617764 ssh -n ha-617764-m03 sudo cat                                        | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC | 13 Sep 24 18:46 UTC |
	|         | /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-617764 node stop m02 -v=7                                                   | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-617764 node start m02 -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764 -v=7                                                         | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-617764 -v=7                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true -v=7                                                  | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:51 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-617764                                                              | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	| node    | ha-617764 node delete m03 -v=7                                                 | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC | 13 Sep 24 18:56 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-617764 stop -v=7                                                            | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-617764 --wait=true                                                       | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 18:58 UTC |                     |
	|         | -v=7 --alsologtostderr                                                         |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                       |           |         |         |                     |                     |
	| node    | add -p ha-617764                                                               | ha-617764 | jenkins | v1.34.0 | 13 Sep 24 19:09 UTC |                     |
	|         | --control-plane -v=7                                                           |           |         |         |                     |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
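Note that the last audit entry, the `node add` invocation from 19:09 UTC, records no End Time: the harness ran it as a child process under a deadline and killed it. A rough sketch of running the same binary under a context deadline in Go is shown below; the timeout value is an assumption, not the test's actual harness code, but it fails with the same "signal: killed" error once the context expires.

// Rough sketch with an assumed timeout, not the test's actual harness code:
// run the minikube binary under a context deadline. When the deadline passes,
// CommandContext sends SIGKILL to the child, and the returned error reads
// "signal: killed", as in the failure above.
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"node", "add", "-p", "ha-617764", "--control-plane", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("command failed: %v (deadline exceeded: %v, %d bytes of output)\n",
			err, errors.Is(ctx.Err(), context.DeadlineExceeded), len(out))
	}
}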
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:58:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:58:43.150705   31446 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:58:43.150823   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150831   31446 out.go:358] Setting ErrFile to fd 2...
	I0913 18:58:43.150835   31446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:58:43.150989   31446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:58:43.151527   31446 out.go:352] Setting JSON to false
	I0913 18:58:43.152444   31446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2466,"bootTime":1726251457,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:58:43.152531   31446 start.go:139] virtualization: kvm guest
	I0913 18:58:43.155078   31446 out.go:177] * [ha-617764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:58:43.156678   31446 notify.go:220] Checking for updates...
	I0913 18:58:43.156709   31446 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:58:43.158268   31446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:58:43.159544   31446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:58:43.160767   31446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:58:43.162220   31446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:58:43.163615   31446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:58:43.165451   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:43.165853   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.165907   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.180911   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0913 18:58:43.181388   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.181949   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.181971   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.182353   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.182521   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.182750   31446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:58:43.183084   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.183122   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.197519   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0913 18:58:43.197916   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.198411   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.198429   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.198758   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.198946   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.235966   31446 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:58:43.237300   31446 start.go:297] selected driver: kvm2
	I0913 18:58:43.237333   31446 start.go:901] validating driver "kvm2" against &{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.237501   31446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:58:43.237936   31446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.238020   31446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:58:43.253448   31446 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:58:43.254210   31446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:58:43.254249   31446 cni.go:84] Creating CNI manager for ""
	I0913 18:58:43.254286   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 18:58:43.254380   31446 start.go:340] cluster config:
	{Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:58:43.254578   31446 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:58:43.257570   31446 out.go:177] * Starting "ha-617764" primary control-plane node in "ha-617764" cluster
	I0913 18:58:43.258900   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:58:43.258938   31446 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:58:43.258945   31446 cache.go:56] Caching tarball of preloaded images
	I0913 18:58:43.259017   31446 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 18:58:43.259028   31446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 18:58:43.259156   31446 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/config.json ...
	I0913 18:58:43.259345   31446 start.go:360] acquireMachinesLock for ha-617764: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:58:43.259392   31446 start.go:364] duration metric: took 31.174µs to acquireMachinesLock for "ha-617764"
	I0913 18:58:43.259405   31446 start.go:96] Skipping create...Using existing machine configuration
	I0913 18:58:43.259413   31446 fix.go:54] fixHost starting: 
	I0913 18:58:43.259679   31446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:58:43.259711   31446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:58:43.274822   31446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I0913 18:58:43.275298   31446 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:58:43.275852   31446 main.go:141] libmachine: Using API Version  1
	I0913 18:58:43.275878   31446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:58:43.276311   31446 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:58:43.276486   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.276663   31446 main.go:141] libmachine: (ha-617764) Calling .GetState
	I0913 18:58:43.278189   31446 fix.go:112] recreateIfNeeded on ha-617764: state=Running err=<nil>
	W0913 18:58:43.278219   31446 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 18:58:43.280067   31446 out.go:177] * Updating the running kvm2 "ha-617764" VM ...
	I0913 18:58:43.281138   31446 machine.go:93] provisionDockerMachine start ...
	I0913 18:58:43.281155   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 18:58:43.281323   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.284023   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284521   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.284555   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.284669   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.284825   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.284952   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.285055   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.285196   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.285409   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.285420   31446 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:58:43.394451   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.394477   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394708   31446 buildroot.go:166] provisioning hostname "ha-617764"
	I0913 18:58:43.394736   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.394924   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.397704   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398088   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.398141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.398322   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.398529   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398740   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.398893   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.399057   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.399258   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.399275   31446 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-617764 && echo "ha-617764" | sudo tee /etc/hostname
	I0913 18:58:43.520106   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-617764
	
	I0913 18:58:43.520131   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.522812   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523152   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.523170   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.523391   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:43.523571   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523748   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:43.523885   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:43.524100   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:43.524293   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:43.524308   31446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-617764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-617764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-617764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:58:43.635855   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:58:43.635900   31446 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 18:58:43.635930   31446 buildroot.go:174] setting up certificates
	I0913 18:58:43.635943   31446 provision.go:84] configureAuth start
	I0913 18:58:43.635958   31446 main.go:141] libmachine: (ha-617764) Calling .GetMachineName
	I0913 18:58:43.636270   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 18:58:43.638723   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639091   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.639122   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.639263   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:43.641516   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.641896   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:43.641921   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:43.642009   31446 provision.go:143] copyHostCerts
	I0913 18:58:43.642045   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642090   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 18:58:43.642118   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 18:58:43.642204   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 18:58:43.642317   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642345   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 18:58:43.642351   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 18:58:43.642393   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 18:58:43.642482   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642507   31446 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 18:58:43.642516   31446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 18:58:43.642554   31446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 18:58:43.642629   31446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.ha-617764 san=[127.0.0.1 192.168.39.145 ha-617764 localhost minikube]
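The server certificate regenerated here is later copied to /etc/docker/server.pem on the VM (see the copyRemoteCerts step below). As a hypothetical standalone check, not something minikube runs, Go's standard library can confirm that such a certificate really carries the SANs listed in this log line; the local file name is an assumption for the example.

// Hypothetical check with an assumed local file name: print the SANs of a
// server certificate and verify it covers the hostname and IP from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("server.pem") // assumed path to the generated cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	// VerifyHostname accepts either a DNS name or an IP literal.
	fmt.Println("covers ha-617764:", cert.VerifyHostname("ha-617764") == nil)
	fmt.Println("covers 192.168.39.145:", cert.VerifyHostname("192.168.39.145") == nil)
}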
	I0913 18:58:44.051872   31446 provision.go:177] copyRemoteCerts
	I0913 18:58:44.051926   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:58:44.051949   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.054378   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054746   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.054779   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.054963   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.055136   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.055290   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.055443   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 18:58:44.136923   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 18:58:44.136991   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:58:44.167349   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 18:58:44.167474   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0913 18:58:44.192816   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 18:58:44.192890   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 18:58:44.219869   31446 provision.go:87] duration metric: took 583.909353ms to configureAuth
	I0913 18:58:44.219902   31446 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:58:44.220142   31446 config.go:182] Loaded profile config "ha-617764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:58:44.220219   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 18:58:44.222922   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223448   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 18:58:44.223533   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 18:58:44.223808   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 18:58:44.224007   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224174   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 18:58:44.224308   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 18:58:44.224474   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 18:58:44.224676   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 18:58:44.224698   31446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:00:18.789819   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:00:18.789840   31446 machine.go:96] duration metric: took 1m35.508690532s to provisionDockerMachine
	I0913 19:00:18.789851   31446 start.go:293] postStartSetup for "ha-617764" (driver="kvm2")
	I0913 19:00:18.789861   31446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:00:18.789874   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.790220   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:00:18.790251   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.793500   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.793848   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.793875   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.794048   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.794238   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.794385   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.794569   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:18.877285   31446 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:00:18.883268   31446 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:00:18.883297   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:00:18.883423   31446 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:00:18.883612   31446 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:00:18.883631   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:00:18.883718   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:00:18.893226   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:18.920369   31446 start.go:296] duration metric: took 130.503832ms for postStartSetup
	I0913 19:00:18.920414   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:18.920676   31446 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0913 19:00:18.920707   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:18.923635   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924114   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:18.924141   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:18.924348   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:18.924535   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:18.924698   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:18.924850   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	W0913 19:00:19.009141   31446 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0913 19:00:19.009172   31446 fix.go:56] duration metric: took 1m35.749758939s for fixHost
	I0913 19:00:19.009198   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.011920   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012313   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.012336   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.012505   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.012684   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012842   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.012978   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.013111   31446 main.go:141] libmachine: Using SSH client type: native
	I0913 19:00:19.013373   31446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0913 19:00:19.013392   31446 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:00:19.118884   31446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254019.083169511
	
	I0913 19:00:19.118912   31446 fix.go:216] guest clock: 1726254019.083169511
	I0913 19:00:19.118923   31446 fix.go:229] Guest: 2024-09-13 19:00:19.083169511 +0000 UTC Remote: 2024-09-13 19:00:19.009181164 +0000 UTC m=+95.893684428 (delta=73.988347ms)
	I0913 19:00:19.118983   31446 fix.go:200] guest clock delta is within tolerance: 73.988347ms
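
A minimal Go sketch of the guest-clock check above: parse the two timestamps reported in the log, take the absolute difference, and compare it against a tolerance. The 2-second tolerance here is an assumption for illustration, not a value taken from the log.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the log lines above.
        guest := time.Unix(0, 1726254019083169511) // guest clock, ns since the epoch
        remote, err := time.Parse(time.RFC3339Nano, "2024-09-13T19:00:19.009181164Z")
        if err != nil {
            panic(err)
        }

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
    }
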
	I0913 19:00:19.118991   31446 start.go:83] releasing machines lock for "ha-617764", held for 1m35.85958928s
	I0913 19:00:19.119022   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.119255   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:19.121927   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122454   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.122593   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.122762   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123286   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123470   31446 main.go:141] libmachine: (ha-617764) Calling .DriverName
	I0913 19:00:19.123531   31446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:00:19.123584   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.123664   31446 ssh_runner.go:195] Run: cat /version.json
	I0913 19:00:19.123680   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHHostname
	I0913 19:00:19.126137   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126495   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126557   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126605   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.126870   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.126965   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:19.126997   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:19.127049   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127133   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHPort
	I0913 19:00:19.127204   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127289   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHKeyPath
	I0913 19:00:19.127344   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.127430   31446 main.go:141] libmachine: (ha-617764) Calling .GetSSHUsername
	I0913 19:00:19.127554   31446 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/ha-617764/id_rsa Username:docker}
	I0913 19:00:19.230613   31446 ssh_runner.go:195] Run: systemctl --version
	I0913 19:00:19.238299   31446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:00:19.405183   31446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:00:19.411872   31446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:00:19.411926   31446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:00:19.421058   31446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:00:19.421086   31446 start.go:495] detecting cgroup driver to use...
	I0913 19:00:19.421155   31446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:00:19.436778   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:00:19.450920   31446 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:00:19.450979   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:00:19.464921   31446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:00:19.478168   31446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:00:19.645366   31446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:00:19.801636   31446 docker.go:233] disabling docker service ...
	I0913 19:00:19.801712   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:00:19.818239   31446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:00:19.832446   31446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:00:19.978995   31446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:00:20.122997   31446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:00:20.139838   31446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:00:20.159570   31446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:00:20.159648   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.172313   31446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:00:20.172387   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.183969   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.195156   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.206292   31446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:00:20.218569   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.229457   31446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:00:20.241787   31446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
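
The sed commands above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch (not minikube's implementation) that performs the same two substitutions with regular expressions; the 0644 file mode is an assumption.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Replace whole lines, mirroring the `sed -i 's|^.*pause_image = .*$|...|'` calls.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated", path)
    }
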
	I0913 19:00:20.252269   31446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:00:20.262210   31446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:00:20.272169   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:20.432441   31446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:00:27.397849   31446 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.965372324s)
	I0913 19:00:27.397881   31446 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:00:27.397939   31446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:00:27.404132   31446 start.go:563] Will wait 60s for crictl version
	I0913 19:00:27.404202   31446 ssh_runner.go:195] Run: which crictl
	I0913 19:00:27.407981   31446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:00:27.443823   31446 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
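
The step above waits up to 60s for the runtime to answer `sudo /usr/bin/crictl version`. A minimal sketch of that kind of poll-until-ready loop; the 2-second poll interval is an assumption, while the crictl path comes from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForCrictl retries `crictl version` until it succeeds or the deadline passes.
    func waitForCrictl(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("crictl did not become ready within %v: %v", timeout, err)
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
    }

    func main() {
        if err := waitForCrictl(60 * time.Second); err != nil {
            panic(err)
        }
    }
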
	I0913 19:00:27.443905   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.475173   31446 ssh_runner.go:195] Run: crio --version
	I0913 19:00:27.506743   31446 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:00:27.508011   31446 main.go:141] libmachine: (ha-617764) Calling .GetIP
	I0913 19:00:27.510651   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511033   31446 main.go:141] libmachine: (ha-617764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:5d:60", ip: ""} in network mk-ha-617764: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:00 +0000 UTC Type:0 Mac:52:54:00:1a:5d:60 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-617764 Clientid:01:52:54:00:1a:5d:60}
	I0913 19:00:27.511060   31446 main.go:141] libmachine: (ha-617764) DBG | domain ha-617764 has defined IP address 192.168.39.145 and MAC address 52:54:00:1a:5d:60 in network mk-ha-617764
	I0913 19:00:27.511270   31446 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:00:27.516012   31446 kubeadm.go:883] updating cluster {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:00:27.516147   31446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:00:27.516207   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.563165   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.563185   31446 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:00:27.563228   31446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:00:27.599775   31446 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:00:27.599799   31446 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:00:27.599809   31446 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.1 crio true true} ...
	I0913 19:00:27.599915   31446 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-617764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:00:27.600007   31446 ssh_runner.go:195] Run: crio config
	I0913 19:00:27.651311   31446 cni.go:84] Creating CNI manager for ""
	I0913 19:00:27.651333   31446 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:00:27.651343   31446 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:00:27.651366   31446 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-617764 NodeName:ha-617764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:00:27.651508   31446 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-617764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:00:27.651538   31446 kube-vip.go:115] generating kube-vip config ...
	I0913 19:00:27.651587   31446 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 19:00:27.664287   31446 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 19:00:27.664396   31446 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
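
The kube-vip manifest above is rendered from a handful of cluster parameters (VIP 192.168.39.254, port 8443, interface eth0, image ghcr.io/kube-vip/kube-vip:v0.8.0). A minimal Go sketch, not minikube's actual template, that renders a reduced manifest from those parameters with text/template.

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down manifest; only the fields that vary per cluster are templated.
    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: vip_interface
          value: {{.Interface}}
        - name: address
          value: {{.VIP}}
        - name: port
          value: "{{.Port}}"
    `

    func main() {
        params := struct {
            Image, Interface, VIP string
            Port                  int
        }{"ghcr.io/kube-vip/kube-vip:v0.8.0", "eth0", "192.168.39.254", 8443}

        tmpl := template.Must(template.New("kube-vip").Parse(manifest))
        if err := tmpl.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
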
	I0913 19:00:27.664455   31446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:00:27.674466   31446 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:00:27.674547   31446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 19:00:27.684733   31446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 19:00:27.702120   31446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:00:27.719612   31446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 19:00:27.737029   31446 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 19:00:27.755478   31446 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 19:00:27.759223   31446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:00:27.910765   31446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:00:27.925634   31446 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764 for IP: 192.168.39.145
	I0913 19:00:27.925655   31446 certs.go:194] generating shared ca certs ...
	I0913 19:00:27.925670   31446 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:27.925837   31446 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:00:27.925877   31446 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:00:27.925887   31446 certs.go:256] generating profile certs ...
	I0913 19:00:27.925954   31446 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/client.key
	I0913 19:00:27.925980   31446 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01
	I0913 19:00:27.926001   31446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145 192.168.39.203 192.168.39.254]
	I0913 19:00:28.083419   31446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 ...
	I0913 19:00:28.083444   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01: {Name:mk5610f7b2a13e2e9a2db0fd30b419eeb2bcec9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083629   31446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 ...
	I0913 19:00:28.083645   31446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01: {Name:mk0e8fc15f8ef270cc2f47ac846f3a3e4156c718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:00:28.083740   31446 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt
	I0913 19:00:28.083880   31446 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key.902bad01 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key
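
The apiserver certificate above is issued with a specific set of IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1 and the three cluster addresses). A self-contained Go sketch that creates a certificate carrying those SANs; it self-signs for brevity, whereas the real flow signs with the cluster CA, and the key size, validity, subject, and output filename are assumptions.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // IP SANs from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.145"), net.ParseIP("192.168.39.203"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("apiserver.crt")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
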
	I0913 19:00:28.084003   31446 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key
	I0913 19:00:28.084017   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:00:28.084030   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:00:28.084042   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:00:28.084057   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:00:28.084069   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:00:28.084082   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:00:28.084100   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:00:28.084113   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:00:28.084157   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:00:28.084185   31446 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:00:28.084195   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:00:28.084215   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:00:28.084238   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:00:28.084258   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:00:28.084294   31446 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:00:28.084323   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:00:28.084336   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.084348   31446 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.084922   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:00:28.111077   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:00:28.134495   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:00:28.159747   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:00:28.182325   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:00:28.205586   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:00:28.229539   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:00:28.252370   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/ha-617764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:00:28.275737   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:00:28.300247   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:00:28.324266   31446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:00:28.347577   31446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:00:28.365115   31446 ssh_runner.go:195] Run: openssl version
	I0913 19:00:28.408066   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:00:28.469517   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486389   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.486486   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:00:28.525327   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:00:28.652306   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:00:28.760544   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769712   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.769775   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:00:28.819345   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:00:28.906062   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:00:29.048802   31446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.102932   31446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.103020   31446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:00:29.115422   31446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:00:29.318793   31446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:00:29.362153   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:00:29.471278   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:00:29.492455   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:00:29.513786   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:00:29.728338   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:00:29.780205   31446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
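
Each `openssl x509 ... -checkend 86400` call above asserts that a certificate does not expire within the next 24 hours. An equivalent check, sketched in Go for one of the files listed in the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        cutoff := time.Now().Add(86400 * time.Second)
        fmt.Printf("NotAfter=%s expiresWithin24h=%v\n", cert.NotAfter, cert.NotAfter.Before(cutoff))
    }
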
	I0913 19:00:29.853145   31446 kubeadm.go:392] StartCluster: {Name:ha-617764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-617764 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:00:29.853301   31446 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:00:29.853366   31446 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:00:30.060193   31446 cri.go:89] found id: "7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2"
	I0913 19:00:30.060217   31446 cri.go:89] found id: "360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e"
	I0913 19:00:30.060223   31446 cri.go:89] found id: "26de4c71cc1f8d3a39e52e622c86361c67e1839a5b84f098c669196c7c161196"
	I0913 19:00:30.060228   31446 cri.go:89] found id: "12d8e3661fa4705e4486cfa4b69b3f31e0b159af038044b195db15b9345f4f4c"
	I0913 19:00:30.060233   31446 cri.go:89] found id: "c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd"
	I0913 19:00:30.060237   31446 cri.go:89] found id: "bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f"
	I0913 19:00:30.060240   31446 cri.go:89] found id: "570c77981741ff23e853840359e686b304c18dfbe54cb12c07ca5d8bf2b8de17"
	I0913 19:00:30.060244   31446 cri.go:89] found id: "0a368121b3974a88cb67c446e03fd5709ed4dd291d3b5de37b4544a7c42b60cc"
	I0913 19:00:30.060247   31446 cri.go:89] found id: "32fcfa457f3ff0e638142def4aa43f12d6a5a779bfe86e597cc242d7f4d9d19d"
	I0913 19:00:30.060254   31446 cri.go:89] found id: "46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69"
	I0913 19:00:30.060259   31446 cri.go:89] found id: "09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87"
	I0913 19:00:30.060262   31446 cri.go:89] found id: "dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e"
	I0913 19:00:30.060266   31446 cri.go:89] found id: "b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1"
	I0913 19:00:30.060270   31446 cri.go:89] found id: "15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89"
	I0913 19:00:30.060277   31446 cri.go:89] found id: "1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163"
	I0913 19:00:30.060281   31446 cri.go:89] found id: "80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222"
	I0913 19:00:30.060286   31446 cri.go:89] found id: ""
	I0913 19:00:30.060335   31446 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 13 19:11:46 ha-617764 crio[6149]: time="2024-09-13 19:11:46.999164665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254706999142892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c7b1eca-20dd-41dc-9596-cb16ecf1ebd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.000201165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9697fa9b-5bbf-4e70-91ea-4b06dd4df2c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.000302999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9697fa9b-5bbf-4e70-91ea-4b06dd4df2c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.001019466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9697fa9b-5bbf-4e70-91ea-4b06dd4df2c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.044434528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f54f3f0f-b8ff-422a-9df6-73ce5f9c4a9a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.044528900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f54f3f0f-b8ff-422a-9df6-73ce5f9c4a9a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.045747041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0fed84b-e849-417d-bde9-cafc13621233 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.046157883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254707046133545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0fed84b-e849-417d-bde9-cafc13621233 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.046761729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a920923a-44e2-4cae-8d7b-d40d178ff9b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.046812741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a920923a-44e2-4cae-8d7b-d40d178ff9b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.047503002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a920923a-44e2-4cae-8d7b-d40d178ff9b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.095391489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ee6f84b-b332-49bc-95ce-3a2feca7be54 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.095466375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ee6f84b-b332-49bc-95ce-3a2feca7be54 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.096538077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ad7fa45-8c01-4f39-b576-08a6dc100dab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.096981058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254707096957050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ad7fa45-8c01-4f39-b576-08a6dc100dab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.097597308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b05626ce-4ff1-4677-be71-28e32fea6836 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.097651225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b05626ce-4ff1-4677-be71-28e32fea6836 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.098093270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b05626ce-4ff1-4677-be71-28e32fea6836 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.146402983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb577b8b-4286-4481-a429-40a81afbe38f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.146492770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb577b8b-4286-4481-a429-40a81afbe38f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.148320157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17789149-5a4a-442e-b46b-e2b56798ea4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.148796178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254707148771691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17789149-5a4a-442e-b46b-e2b56798ea4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.149500580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2be13e2f-82ae-420d-a85d-87942eded10c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.149552828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2be13e2f-82ae-420d-a85d-87942eded10c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:11:47 ha-617764 crio[6149]: time="2024-09-13 19:11:47.149952964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:999f5e6003ef9162b57fec4793fbe75aab1348731d8d4f27ad7d3029004b6d4c,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726254518536729625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726254374542802960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726254272527637261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e916b90f9253d6855bcd1b24ab1ae47479a3f23fd600018cb0738677896b324f,PodSandboxId:2ec7df8952268fade9fc0ffc23b16f677fdc2189770cb119d9f905f75fcc7282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726254205523822299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5f1a84-1798-430e-af04-82469e8f4a7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0,PodSandboxId:b36021c0b35cdc0a068086003611f08b09d2d09f7980d332998b85651d1f5f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726254195530889880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815ca8cb73177215968b5c5242b63776,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0,PodSandboxId:639b42fbde0c6031c445be6bfab0a91e5a252efc1e759eace315ed7ee44203e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726254130535295539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4db9ee38410b02d601ed80ae90b5a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7f61f474e782e403819281bc5a7d281f13bc89127e0093ac24be08ab21acdc,PodSandboxId:ae1363f1228347bc367ec486d746b078aece68a3917c3a8f082468352ed540ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726254062550829569,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48,PodSandboxId:90fa239fc72bb6f4e32a775ccd5954a7efb3c77a68e1e066547347d6aa9de270,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726254029473467866,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465,PodSandboxId:743d4b43092c6c121728464d3854175d802db7a5cbb8456d4d3e321a84e3380d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029507452958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a,PodSandboxId:43cddd96b715864e6de09c1fa6b08a212431d9f454105b634b09d702d276ce38,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726254029705747992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2,PodSandboxId:1aee20bf902b8f7f8a9930e46d1fefb6a6c5f2d3d6d8c4ca74d565558460f8cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726254029639290085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b
9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e,PodSandboxId:477f3d5572a61dae49e2682bbe5fcfd071182b44e8574426070fd470e9d8b5d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726254029288189795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd,PodSandboxId:e94c56bdaeede8bdf6b9672b64791948a015daf333d84890a1b688a898d34e7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726254029033042084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc744a6ac873da0f38fbe5a85580ba322c578e68359078003640394bc1a9784f,PodSandboxId:c01954306193722d6eb940bc8b28659b638bd552e7eee2298bf05a3dac30ffcf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726254028929207262,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb3333d84624db9148786fc866cc4cb99179a577e31d3365459788ba7b02f59,PodSandboxId:0238ab84a512173bc79a44c0c48ee4a9bfeeaed98c7655e711a18a704a5951f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726253659862538947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-t4fwq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bc3749b-0225-445c-9b86-767558392df7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d659112c682a93bb4560212566d776e405246e1b3f91e9cc2ad5198b2b8c69,PodSandboxId:566613db4514bf925926ac26426504bb464936a657b23ca25af3ad3aa0a803e1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726253640657625545,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5545735943f8ff5a38c9aea0b4c785ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e,PodSandboxId:18e2ef1278c487f38e01dc8ec652cadc524a2e3c08f1b80bae0f12b5ce7d5f45,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726253626790338590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-b9bzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81130c38-ff5d-4c9e-ab7b-7eae4a62d3b8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87,PodSandboxId:5f1a3394b645b31406ab8d54ac120cd39398070996c972bf73df2d505c1848b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626803961664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fdhnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c509676-c7ba-4841-89b5-7e4266abd9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1,PodSandboxId:3a3adb124d23e71e636eaf0f200f72f3779d41c493e0475b011f1e825d452f00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726253626698915633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-htrbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 41a8301e-fca3-4907-bc77-808b013a2d2a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163,PodSandboxId:09bbefd12114cb9e6833e18edd3e080d729dc865efaaa80af99dd78c1c2387c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,C
reatedAt:1726253626405117690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92mml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36bd37dc-88c4-4264-9e7c-a90246cc5212,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89,PodSandboxId:acfcaea56c23ed4626d8134635877c58a053b2861b24d0d53cf3024c7c8c1ca0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726253626518315300,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3d4ca74d8429dc43b760fdf8f185ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222,PodSandboxId:a63972ff65b125bee7dbfeebba7408899591af85f385eeba93cb4a7472b5a831,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726253626301782095,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-617764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15cf7928620050653d6239c1007547bd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2be13e2f-82ae-420d-a85d-87942eded10c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	999f5e6003ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago       Running             storage-provisioner       7                   2ec7df8952268       storage-provisioner
	d9e9ac5d6b79f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   5 minutes ago       Running             kube-controller-manager   6                   b36021c0b35cd       kube-controller-manager-ha-617764
	87156e375ce6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 minutes ago       Running             kube-apiserver            6                   639b42fbde0c6       kube-apiserver-ha-617764
	e916b90f9253d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 minutes ago       Exited              storage-provisioner       6                   2ec7df8952268       storage-provisioner
	8a3f92c39f616       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 minutes ago       Exited              kube-controller-manager   5                   b36021c0b35cd       kube-controller-manager-ha-617764
	50283a2285386       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Exited              kube-apiserver            5                   639b42fbde0c6       kube-apiserver-ha-617764
	bf7f61f474e78       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   10 minutes ago      Running             busybox                   2                   ae1363f122834       busybox-7dff88458-t4fwq
	70f0f4e37a417       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   2                   43cddd96b7158       coredns-7c65d6cfc9-fdhnm
	7cb162ca4a916       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   11 minutes ago      Running             kindnet-cni               2                   1aee20bf902b8       kindnet-b9bzd
	2ca0aab49c546       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   2                   743d4b43092c6       coredns-7c65d6cfc9-htrbt
	0bdc8b32559cc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   11 minutes ago      Running             kube-proxy                2                   90fa239fc72bb       kube-proxy-92mml
	360965c899e52       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   11 minutes ago      Running             kube-scheduler            2                   477f3d5572a61       kube-scheduler-ha-617764
	c22324f5733e4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   11 minutes ago      Running             etcd                      2                   e94c56bdaeede       etcd-ha-617764
	bc744a6ac873d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   11 minutes ago      Running             kube-vip                  1                   c019543061937       kube-vip-ha-617764
	2bb3333d84624       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago      Exited              busybox                   1                   0238ab84a5121       busybox-7dff88458-t4fwq
	46d659112c682       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   17 minutes ago      Exited              kube-vip                  0                   566613db4514b       kube-vip-ha-617764
	09fe052337ef3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Exited              coredns                   1                   5f1a3394b645b       coredns-7c65d6cfc9-fdhnm
	dddc0dfb6a255       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   18 minutes ago      Exited              kindnet-cni               1                   18e2ef1278c48       kindnet-b9bzd
	b752b1ac699cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Exited              coredns                   1                   3a3adb124d23e       coredns-7c65d6cfc9-htrbt
	15c33340e3091       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Exited              etcd                      1                   acfcaea56c23e       etcd-ha-617764
	1d1a0b2d1c95e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   18 minutes ago      Exited              kube-proxy                1                   09bbefd12114c       kube-proxy-92mml
	80a7cb47dee67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   18 minutes ago      Exited              kube-scheduler            1                   a63972ff65b12       kube-scheduler-ha-617764
	
	
	==> coredns [09fe052337ef3c922d075bb554c17c548520bce6283d1973e3507ca8d99ace87] <==
	Trace[818669773]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:54:01.526)
	Trace[818669773]: [10.000979018s] [10.000979018s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52492->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43514->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ca0aab49c5463b35d609dd4876256ce1882f5f15b9314d4efc1c9460e5da465] <==
	Trace[935271282]: [14.299786922s] [14.299786922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [70f0f4e37a417dce143a67fa1dcf0c5c3b58a8856c27d2bc40f1a94c8751842a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b752b1ac699cb46bd464528882ec5c1bfb29241c94a1eda062141311151c2fc1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:54310->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-617764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_42_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:42:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:11:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:09:52 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:09:52 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:09:52 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:09:52 +0000   Fri, 13 Sep 2024 18:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ha-617764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f2aba800dcc4d28902afcae110b3305
	  System UUID:                0f2aba80-0dcc-4d28-902a-fcae110b3305
	  Boot ID:                    07a71fa7-1555-49dc-a1c7-c845200ddeaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t4fwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7c65d6cfc9-fdhnm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7c65d6cfc9-htrbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-617764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-b9bzd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-617764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-617764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-92mml                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-617764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-617764                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                 From             Message
	  ----     ------                   ----                ----             -------
	  Normal   Starting                 17m                 kube-proxy       
	  Normal   Starting                 29m                 kube-proxy       
	  Normal   Starting                 29m                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           29m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           28m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           26m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           17m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           17m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           16m                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   NodeNotReady             14m                 node-controller  Node ha-617764 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    14m (x2 over 29m)   kubelet          Node ha-617764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x2 over 29m)   kubelet          Node ha-617764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x2 over 29m)   kubelet          Node ha-617764 status is now: NodeHasSufficientPID
	  Normal   NodeReady                14m (x2 over 29m)   kubelet          Node ha-617764 status is now: NodeReady
	  Warning  ContainerGCFailed        12m (x3 over 19m)   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             11m (x10 over 19m)  kubelet          Node ha-617764 status is now: NodeNotReady
	  Normal   RegisteredNode           5m31s               node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	  Normal   RegisteredNode           67s                 node-controller  Node ha-617764 event: Registered Node ha-617764 in Controller
	
	
	Name:               ha-617764-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_43_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:11:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:09:48 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:09:48 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:09:48 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:09:48 +0000   Fri, 13 Sep 2024 18:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-617764-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cc6f4389cf8474aa5ba9d0c108b603d
	  System UUID:                3cc6f438-9cf8-474a-a5ba-9d0c108b603d
	  Boot ID:                    3ff149de-a1f6-4a53-9c3a-07c56d69cf30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c28t9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-617764-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-bc2zg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-617764-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-617764-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-hqm8n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-617764-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-617764-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 7m28s              kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 28m                kube-proxy       
	  Normal   NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           28m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           28m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             24m                node-controller  Node ha-617764-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-617764-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-617764-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   NodeNotReady             10m                kubelet          Node ha-617764-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        10m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m31s              node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	  Normal   RegisteredNode           67s                node-controller  Node ha-617764-m02 event: Registered Node ha-617764-m02 in Controller
	
	
	Name:               ha-617764-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T18_45_53_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:56:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 18:55:54 +0000   Fri, 13 Sep 2024 18:56:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-617764-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 377b052ecb52420ba7e0fb039f04c4f5
	  System UUID:                377b052e-cb52-420b-a7e0-fb039f04c4f5
	  Boot ID:                    44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hzxvw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kindnet-47jgz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-5rlkn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   NodeReady                25m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-617764-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-617764-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 15m                kubelet          Node ha-617764-m04 has been rebooted, boot id: 44a904c1-478c-4733-89d7-64bb5ed6ea9f
	  Normal   NodeReady                15m                kubelet          Node ha-617764-m04 status is now: NodeReady
	  Normal   NodeNotReady             14m (x2 over 16m)  node-controller  Node ha-617764-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           5m31s              node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	  Normal   RegisteredNode           67s                node-controller  Node ha-617764-m04 event: Registered Node ha-617764-m04 in Controller
	
	
	Name:               ha-617764-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-617764-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=ha-617764
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T19_10_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:10:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-617764-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:11:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:11:02 +0000   Fri, 13 Sep 2024 19:10:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:11:02 +0000   Fri, 13 Sep 2024 19:10:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:11:02 +0000   Fri, 13 Sep 2024 19:10:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:11:02 +0000   Fri, 13 Sep 2024 19:10:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-617764-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92928d1e5fb04e50acaefeba3488a4c9
	  System UUID:                92928d1e-5fb0-4e50-acae-feba3488a4c9
	  Boot ID:                    0614792b-9dcd-4fe7-9364-1ae2bed8c05e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-617764-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kindnet-fzs9m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      76s
	  kube-system                 kube-apiserver-ha-617764-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-ha-617764-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-xvlkj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-ha-617764-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-vip-ha-617764-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node ha-617764-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node ha-617764-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node ha-617764-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                node-controller  Node ha-617764-m05 event: Registered Node ha-617764-m05 in Controller
	  Normal  RegisteredNode           67s                node-controller  Node ha-617764-m05 event: Registered Node ha-617764-m05 in Controller
	
	
	==> dmesg <==
	[  +0.063743] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140604] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090298] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.514292] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.179813] kauditd_printk_skb: 38 callbacks suppressed
	[Sep13 18:43] kauditd_printk_skb: 24 callbacks suppressed
	[Sep13 18:53] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.152592] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.176959] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +0.278033] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +6.938453] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.087335] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.505183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221465] kauditd_printk_skb: 85 callbacks suppressed
	[Sep13 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.066370] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 19:00] systemd-fstab-generator[6064]: Ignoring "noauto" option for root device
	[  +0.171401] systemd-fstab-generator[6082]: Ignoring "noauto" option for root device
	[  +0.186624] systemd-fstab-generator[6096]: Ignoring "noauto" option for root device
	[  +0.141420] systemd-fstab-generator[6108]: Ignoring "noauto" option for root device
	[  +0.313065] systemd-fstab-generator[6136]: Ignoring "noauto" option for root device
	[  +7.472494] systemd-fstab-generator[6247]: Ignoring "noauto" option for root device
	[  +0.086449] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.730244] kauditd_printk_skb: 117 callbacks suppressed
	
	
	==> etcd [15c33340e3091f96212bbbf29c4a8802468f76ac07126f47265d4f2b86321a89] <==
	{"level":"info","ts":"2024-09-13T18:58:44.411752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term 3] starts to transfer leadership to 130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.411785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 sends MsgTimeoutNow to 130da78b66ce9e95 immediately as 130da78b66ce9e95 already has up-to-date log"}
	{"level":"info","ts":"2024-09-13T18:58:44.414478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [term: 3] received a MsgVote message with higher term from 130da78b66ce9e95 [term: 4]"}
	{"level":"info","ts":"2024-09-13T18:58:44.414534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became follower at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 [logterm: 3, index: 3644, vote: 0] cast MsgVote for 130da78b66ce9e95 [logterm: 3, index: 3644] at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.414556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 lost leader 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.416226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 130da78b66ce9e95 at term 4"}
	{"level":"info","ts":"2024-09-13T18:58:44.512693Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"44b3a0f32f80bb09","old-leader-member-id":"44b3a0f32f80bb09","new-leader-member-id":"130da78b66ce9e95","took":"101.001068ms"}
	{"level":"info","ts":"2024-09-13T18:58:44.512832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.513914Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.514037Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515584Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515625Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515668Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515788Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515815Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"130da78b66ce9e95","error":"failed to read 130da78b66ce9e95 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-13T18:58:44.515846Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.515937Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95","error":"context canceled"}
	{"level":"info","ts":"2024-09-13T18:58:44.515950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"130da78b66ce9e95"}
	{"level":"info","ts":"2024-09-13T18:58:44.515960Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"130da78b66ce9e95"}
	{"level":"warn","ts":"2024-09-13T18:58:44.522046Z","caller":"rafthttp/http.go:413","msg":"failed to find remote peer in cluster","local-member-id":"44b3a0f32f80bb09","remote-peer-id-stream-handler":"44b3a0f32f80bb09","remote-peer-id-from":"130da78b66ce9e95","cluster-id":"33ee9922f2bf4379"}
	{"level":"info","ts":"2024-09-13T18:58:44.522270Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"warn","ts":"2024-09-13T18:58:44.523349Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.203:60554","server-name":"","error":"set tcp 192.168.39.145:2380: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T18:58:45.058204Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-13T18:58:45.058341Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-617764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [c22324f5733e44ea66404b16a363cefc848145f9b3f1a75daecc8b7369ff96dd] <==
	{"level":"info","ts":"2024-09-13T19:10:31.603385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"589727043a46996","added-peer-peer-urls":["https://192.168.39.164:2380"]}
	{"level":"info","ts":"2024-09-13T19:10:31.604834Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.605158Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.606136Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.606882Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.608143Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.608193Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.608340Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:31.608277Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996","remote-peer-urls":["https://192.168.39.164:2380"]}
	{"level":"warn","ts":"2024-09-13T19:10:31.671145Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"589727043a46996","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-13T19:10:32.160088Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"589727043a46996","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-13T19:10:32.226042Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.164:2380/version","remote-member-id":"589727043a46996","error":"Get \"https://192.168.39.164:2380/version\": dial tcp 192.168.39.164:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-13T19:10:32.226147Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"589727043a46996","error":"Get \"https://192.168.39.164:2380/version\": dial tcp 192.168.39.164:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-13T19:10:32.914367Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:32.922047Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:32.925304Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:33.022371Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"589727043a46996","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-13T19:10:33.022512Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:33.051950Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"44b3a0f32f80bb09","to":"589727043a46996","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-13T19:10:33.052085Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"44b3a0f32f80bb09","remote-peer-id":"589727043a46996"}
	{"level":"warn","ts":"2024-09-13T19:10:33.160599Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"589727043a46996","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-13T19:10:33.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(398975868495751574 1372937678584979093 4950477381744769801)"}
	{"level":"info","ts":"2024-09-13T19:10:33.660877Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09"}
	{"level":"info","ts":"2024-09-13T19:10:33.660902Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"44b3a0f32f80bb09","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"589727043a46996"}
	{"level":"info","ts":"2024-09-13T19:10:42.928858Z","caller":"traceutil/trace.go:171","msg":"trace[345192002] transaction","detail":"{read_only:false; response_revision:3946; number_of_response:1; }","duration":"110.792286ms","start":"2024-09-13T19:10:42.818045Z","end":"2024-09-13T19:10:42.928837Z","steps":["trace[345192002] 'process raft request'  (duration: 109.383366ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:47 up 29 min,  0 users,  load average: 0.16, 0.23, 0.29
	Linux ha-617764 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7cb162ca4a91609e62eb9a3b82ac2d18f1f007e8a4b2577cce0250928caa85f2] <==
	I0913 19:11:10.923047       1 main.go:322] Node ha-617764-m05 has CIDR [10.244.2.0/24] 
	I0913 19:11:20.916852       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:11:20.916967       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:11:20.917104       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0913 19:11:20.917140       1 main.go:322] Node ha-617764-m05 has CIDR [10.244.2.0/24] 
	I0913 19:11:20.917304       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:11:20.917333       1 main.go:299] handling current node
	I0913 19:11:20.917355       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:11:20.917371       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:11:30.917395       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:11:30.917448       1 main.go:299] handling current node
	I0913 19:11:30.917463       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:11:30.917469       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:11:30.917620       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:11:30.917626       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:11:30.917678       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0913 19:11:30.917721       1 main.go:322] Node ha-617764-m05 has CIDR [10.244.2.0/24] 
	I0913 19:11:40.926427       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 19:11:40.926484       1 main.go:299] handling current node
	I0913 19:11:40.926538       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 19:11:40.926550       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 19:11:40.926766       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 19:11:40.926796       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 19:11:40.926899       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0913 19:11:40.926932       1 main.go:322] Node ha-617764-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [dddc0dfb6a255d4d35498f110cd8ade544b6f9eb17cef621a12500640512581e] <==
	I0913 18:57:57.992785       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.986622       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:07.986810       1 main.go:299] handling current node
	I0913 18:58:07.986855       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:07.986874       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:07.987050       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:07.987072       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988128       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:17.988336       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:17.988500       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:17.988524       1 main.go:299] handling current node
	I0913 18:58:17.988554       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:17.988558       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988426       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:27.988495       1 main.go:299] handling current node
	I0913 18:58:27.988516       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:27.988521       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:27.988689       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:27.988745       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	I0913 18:58:37.994223       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0913 18:58:37.994340       1 main.go:299] handling current node
	I0913 18:58:37.994361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0913 18:58:37.994371       1 main.go:322] Node ha-617764-m02 has CIDR [10.244.1.0/24] 
	I0913 18:58:37.994612       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0913 18:58:37.994637       1 main.go:322] Node ha-617764-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [50283a22853869e4521e5d44efcff41101ffde818329fd7423f482cb33efabc0] <==
	W0913 19:03:07.117209       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	E0913 19:03:07.118747       1 cacher.go:478] cacher (certificatesigningrequests.certificates.k8s.io): unexpected ListAndWatch error: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.117288       1 reflector.go:561] storage/cacher.go:/priorityclasses: failed to list *scheduling.PriorityClass: etcdserver: request timed out
	E0913 19:03:07.118795       1 cacher.go:478] cacher (priorityclasses.scheduling.k8s.io): unexpected ListAndWatch error: failed to list *scheduling.PriorityClass: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118835       1 reflector.go:561] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	E0913 19:03:07.118860       1 cacher.go:478] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.IngressClass: etcdserver: request timed out
	E0913 19:03:07.118908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: failed to list *v1.IngressClass: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.117873       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119041       1 reflector.go:561] storage/cacher.go:/rolebindings: failed to list *rbac.RoleBinding: etcdserver: request timed out
	E0913 19:03:07.119081       1 cacher.go:478] cacher (rolebindings.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.RoleBinding: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.119107       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	E0913 19:03:07.119130       1 cacher.go:478] cacher (horizontalpodautoscalers.autoscaling): unexpected ListAndWatch error: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out; reinitializing...
	W0913 19:03:07.118881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: etcdserver: request timed out
	E0913 19:03:07.119187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: etcdserver: request timed out" logger="UnhandledError"
	W0913 19:03:07.119292       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0913 19:03:07.119338       1 hooks.go:210] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0913 19:03:07.119412       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: etcdserver: request timed out" logger="UnhandledError"
	E0913 19:03:07.155903       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: request timed out" interval="1.6s"
	W0913 19:03:07.119390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	E0913 19:03:07.155969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out" logger="UnhandledError"
	F0913 19:03:07.147197       1 hooks.go:210] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0913 19:03:07.180431       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0913 19:03:07.188666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0913 19:03:07.188800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out" logger="UnhandledError"
	
	
	==> kube-apiserver [87156e375ce6e42c538bb851dcb55abf7b83754448bb78e67a324fd93e76a534] <==
	I0913 19:04:34.155635       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0913 19:04:34.155675       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0913 19:04:34.145448       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:04:34.145457       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:04:34.243089       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:04:34.243807       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:04:34.246351       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:04:34.247643       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:04:34.248414       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:04:34.248975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:04:34.249015       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:04:34.248452       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:04:34.252424       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:04:34.252462       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:04:34.252481       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:04:34.252485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:04:34.252490       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:04:34.265419       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:04:34.275995       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:04:34.276033       1 policy_source.go:224] refreshing policies
	I0913 19:04:34.323537       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:04:35.150657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0913 19:04:35.562141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
	I0913 19:04:35.563630       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:04:35.569590       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a3f92c39f616d85e223876cd67911949b93e345a656250b2b9a93e494f7b7b0] <==
	I0913 19:03:16.402728       1 serving.go:386] Generated self-signed cert in-memory
	I0913 19:03:16.703364       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0913 19:03:16.703449       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:03:16.705317       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:03:16.705492       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:03:16.706001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:03:16.705942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0913 19:03:26.708603       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.145:8443/healthz\": dial tcp 192.168.39.145:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d9e9ac5d6b79f0671011120db7c835a22b4b40515be6c0a224b5d78d10631858] <==
	I0913 19:06:17.452044       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:06:17.472553       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:09:48.801020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m02"
	I0913 19:09:52.926860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764"
	I0913 19:10:31.370023       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-617764-m05\" does not exist"
	I0913 19:10:31.384654       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-617764-m05" podCIDRs=["10.244.2.0/24"]
	I0913 19:10:31.386189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:31.387800       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:31.416856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:31.558621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:32.024020       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-617764-m05"
	I0913 19:10:32.052927       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:35.158651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:35.860853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:36.032052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:40.470075       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:10:40.516277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:40.549563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:41.735162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:42.148062       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:10:50.638808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m04"
	I0913 19:10:54.621811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:54.639910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:10:55.545703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	I0913 19:11:02.096402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-617764-m05"
	
	
	==> kube-proxy [0bdc8b32559cc35726794d7412fa2462beea35cff248df50623987861ae7bd48] <==
	E0913 19:02:27.298107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:30.369582       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:02:42.656761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:02:42.657783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:02:42.657618       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:02:54.945213       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:07.232668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:13.377355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:03:13.377475       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:13.377505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:19.521668       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 19:03:22.593418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:03:22.594211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0913 19:03:31.809399       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0913 19:03:31.809484       1 event_broadcaster.go:216] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-617764.17f4e2ef11fb5014  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2024-09-13 19:01:13.616478822 +0000 UTC m=+43.051066987,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-ha-617764,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  ha-617764 ha-617764   },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	W0913 19:04:08.674299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:08.674633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:14.818943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:14.819106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 19:04:20.961639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 19:04:20.961836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 19:04:46.118228       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:04:48.417939       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:05:14.019838       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [1d1a0b2d1c95ea3234342a3c075a74b46c01ce0a91b97801503cb6fd1fc06163] <==
	E0913 18:54:28.193745       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-617764\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0913 18:54:28.194003       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0913 18:54:28.194170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:54:28.234105       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:54:28.234302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:54:28.234395       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:54:28.237390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:54:28.237818       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:54:28.237860       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:54:28.240362       1 config.go:199] "Starting service config controller"
	I0913 18:54:28.240424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:54:28.240535       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:54:28.240556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:54:28.241385       1 config.go:328] "Starting node config controller"
	I0913 18:54:28.241411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0913 18:54:31.266663       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0913 18:54:31.266902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.267155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.267225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0913 18:54:31.270424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0913 18:54:31.270680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-617764&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0913 18:54:32.241327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:54:32.541475       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:54:32.642363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [360965c899e522788532af521cbf0b477b35e977e91f5a28b282abf712fc953e] <==
	W0913 19:03:57.900185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:57.900372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:03:58.563523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:03:58.563568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.145:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.237583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.237716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:07.479004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:07.479145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:14.886681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:14.886861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.145:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:17.376850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:17.376931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.145:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:18.702116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:18.702189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.145:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:21.189061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:21.189219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:22.488215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:22.488335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:30.978522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:30.978653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:32.316893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 19:04:32.317198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.145:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 19:04:34.163725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:04:34.163824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 19:04:46.592137       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [80a7cb47dee67b788f693c83be0a9b7c41fc30e21ddf9b0131e13737c8047222] <==
	E0913 18:54:18.337790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:18.785652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:18.785751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:23.154505       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.145:8443: connect: connection refused
	E0913 18:54:23.154624       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.145:8443: connect: connection refused" logger="UnhandledError"
	W0913 18:54:26.780601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:54:26.780738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.780951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:54:26.781066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:54:26.783651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:54:26.783955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.783968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:54:26.784151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:54:26.784400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:54:26.784439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:54:44.032097       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 18:56:04.977977       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:56:04.978105       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a455845-10fb-415a-badb-63751bb03ec8(default/busybox-7dff88458-hzxvw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hzxvw"
	E0913 18:56:04.978138       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hzxvw\": pod busybox-7dff88458-hzxvw is already assigned to node \"ha-617764-m04\"" pod="default/busybox-7dff88458-hzxvw"
	I0913 18:56:04.978160       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hzxvw" node="ha-617764-m04"
	E0913 18:58:44.325787       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:10:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:10:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:10:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:10:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:10:28 ha-617764 kubelet[1315]: E0913 19:10:28.981165    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254628980658229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:28 ha-617764 kubelet[1315]: E0913 19:10:28.981351    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254628980658229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:38 ha-617764 kubelet[1315]: E0913 19:10:38.983787    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254638983343822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:38 ha-617764 kubelet[1315]: E0913 19:10:38.984082    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254638983343822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:48 ha-617764 kubelet[1315]: E0913 19:10:48.985829    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254648985298367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:48 ha-617764 kubelet[1315]: E0913 19:10:48.985882    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254648985298367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:58 ha-617764 kubelet[1315]: E0913 19:10:58.988205    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254658987800204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:10:58 ha-617764 kubelet[1315]: E0913 19:10:58.988766    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254658987800204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:08 ha-617764 kubelet[1315]: E0913 19:11:08.991533    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254668991082678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:08 ha-617764 kubelet[1315]: E0913 19:11:08.991812    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254668991082678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:18 ha-617764 kubelet[1315]: E0913 19:11:18.994071    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254678993688617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:18 ha-617764 kubelet[1315]: E0913 19:11:18.994547    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254678993688617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:28 ha-617764 kubelet[1315]: E0913 19:11:28.544687    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:11:28 ha-617764 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:11:28 ha-617764 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:11:28 ha-617764 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:11:28 ha-617764 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:11:28 ha-617764 kubelet[1315]: E0913 19:11:28.997533    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254688996952349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:28 ha-617764 kubelet[1315]: E0913 19:11:28.997640    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254688996952349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:38 ha-617764 kubelet[1315]: E0913 19:11:38.999650    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254698999205899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:11:38 ha-617764 kubelet[1315]: E0913 19:11:38.999741    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726254698999205899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:11:46.701403   34805 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-617764 -n ha-617764
helpers_test.go:261: (dbg) Run:  kubectl --context ha-617764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (125.50s)
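
A side note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any single line longer than its buffer cap (64 KiB by default), so one very long line in lastStart.txt is enough to abort the read. The sketch below only illustrates that failure mode and the usual workaround; it is not minikube's actual logs code, and the file path and the 1 MiB cap are assumptions.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical log file standing in for lastStart.txt; it may contain
	// single lines far longer than bufio.Scanner's 64 KiB default limit.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line cap to 1 MiB (assumed value); without this,
	// sc.Err() returns bufio.ErrTooLong ("token too long") on a long line.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("scan failed: %v", err)
	}
}

An alternative with no fixed per-line cap is to read via bufio.Reader.ReadString('\n'), which grows its buffer as needed.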

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (329.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-832180
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-832180
E0913 19:20:57.576099   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-832180: exit status 82 (2m1.878643716s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-832180-m03"  ...
	* Stopping node "multinode-832180-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-832180" : exit status 82
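
For readers unfamiliar with the harness output: the "exit status 82" here is simply the non-zero exit code of the minikube stop invocation whose GUEST_STOP_TIMEOUT error appears in the stderr block above, and the test fails on any non-zero exit. Below is a minimal, hypothetical sketch of how a subprocess exit code can be captured in Go; the binary path and arguments mirror the command above, but the snippet is not the test's actual helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Illustrative invocation matching the failing command in the report.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-832180")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit from the child process: 82 in the failure above.
		fmt.Printf("stop exited with status %d\n", exitErr.ExitCode())
	} else if err != nil {
		// The binary could not be started at all (not the case here).
		fmt.Printf("stop failed to run: %v\n", err)
	}
}
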
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-832180 --wait=true -v=8 --alsologtostderr
E0913 19:24:06.601620   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-832180 --wait=true -v=8 --alsologtostderr: (3m25.423334984s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-832180
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-832180 -n multinode-832180
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-832180 logs -n 25: (1.454003146s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180:/home/docker/cp-test_multinode-832180-m02_multinode-832180.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180 sudo cat                                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m02_multinode-832180.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03:/home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180-m03 sudo cat                                   | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp testdata/cp-test.txt                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180:/home/docker/cp-test_multinode-832180-m03_multinode-832180.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180 sudo cat                                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02:/home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180-m02 sudo cat                                   | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-832180 node stop m03                                                          | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	| node    | multinode-832180 node start                                                             | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| stop    | -p multinode-832180                                                                     | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| start   | -p multinode-832180                                                                     | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:22 UTC | 13 Sep 24 19:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:22:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:22:03.249260   42355 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:22:03.249376   42355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:22:03.249384   42355 out.go:358] Setting ErrFile to fd 2...
	I0913 19:22:03.249389   42355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:22:03.249580   42355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:22:03.250213   42355 out.go:352] Setting JSON to false
	I0913 19:22:03.251313   42355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3866,"bootTime":1726251457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:22:03.251402   42355 start.go:139] virtualization: kvm guest
	I0913 19:22:03.253736   42355 out.go:177] * [multinode-832180] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:22:03.255048   42355 notify.go:220] Checking for updates...
	I0913 19:22:03.255057   42355 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:22:03.256373   42355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:22:03.257713   42355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:22:03.258927   42355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:22:03.260199   42355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:22:03.261584   42355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:22:03.263594   42355 config.go:182] Loaded profile config "multinode-832180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:22:03.263708   42355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:22:03.264138   42355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:22:03.264179   42355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:22:03.279526   42355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0913 19:22:03.279961   42355 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:22:03.280484   42355 main.go:141] libmachine: Using API Version  1
	I0913 19:22:03.280504   42355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:22:03.280792   42355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:22:03.280976   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.317196   42355 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:22:03.318783   42355 start.go:297] selected driver: kvm2
	I0913 19:22:03.318804   42355 start.go:901] validating driver "kvm2" against &{Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:22:03.318960   42355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:22:03.319320   42355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:22:03.319420   42355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:22:03.334864   42355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:22:03.335552   42355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:22:03.335595   42355 cni.go:84] Creating CNI manager for ""
	I0913 19:22:03.335658   42355 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:22:03.335728   42355 start.go:340] cluster config:
	{Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:22:03.335857   42355 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:22:03.337849   42355 out.go:177] * Starting "multinode-832180" primary control-plane node in "multinode-832180" cluster
	I0913 19:22:03.339093   42355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:22:03.339146   42355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 19:22:03.339157   42355 cache.go:56] Caching tarball of preloaded images
	I0913 19:22:03.339245   42355 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:22:03.339258   42355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
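	The preload tarball verified here bundles the container images for Kubernetes v1.31.1 on CRI-O so the node can skip pulling them. As an aside, such a tarball can be inspected with a plain lz4+tar pipeline (path taken from the log; this listing is not something the test itself runs):
	
		lz4 -dc /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
		  | tar -tvf - | head    # list the first few archived paths without extracting anything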
	I0913 19:22:03.339408   42355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/config.json ...
	I0913 19:22:03.339613   42355 start.go:360] acquireMachinesLock for multinode-832180: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:22:03.339671   42355 start.go:364] duration metric: took 37.899µs to acquireMachinesLock for "multinode-832180"
	I0913 19:22:03.339690   42355 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:22:03.339700   42355 fix.go:54] fixHost starting: 
	I0913 19:22:03.339975   42355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:22:03.340012   42355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:22:03.354368   42355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0913 19:22:03.354900   42355 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:22:03.355484   42355 main.go:141] libmachine: Using API Version  1
	I0913 19:22:03.355506   42355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:22:03.355807   42355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:22:03.355983   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.356157   42355 main.go:141] libmachine: (multinode-832180) Calling .GetState
	I0913 19:22:03.357902   42355 fix.go:112] recreateIfNeeded on multinode-832180: state=Running err=<nil>
	W0913 19:22:03.357923   42355 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:22:03.360152   42355 out.go:177] * Updating the running kvm2 "multinode-832180" VM ...
	I0913 19:22:03.361571   42355 machine.go:93] provisionDockerMachine start ...
	I0913 19:22:03.361601   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.361832   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.364434   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.364877   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.364901   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.365054   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.365213   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.365345   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.365456   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.365624   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.365826   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.365838   42355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:22:03.475028   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-832180
	
	I0913 19:22:03.475051   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.475280   42355 buildroot.go:166] provisioning hostname "multinode-832180"
	I0913 19:22:03.475307   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.475482   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.478454   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.478990   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.479011   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.479221   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.479384   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.479518   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.479658   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.479831   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.479993   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.480006   42355 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-832180 && echo "multinode-832180" | sudo tee /etc/hostname
	I0913 19:22:03.602971   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-832180
	
	I0913 19:22:03.602998   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.605756   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.606184   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.606211   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.606392   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.606574   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.606732   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.606835   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.606955   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.607131   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.607146   42355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-832180' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-832180/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-832180' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:22:03.719036   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:22:03.719063   42355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:22:03.719099   42355 buildroot.go:174] setting up certificates
	I0913 19:22:03.719112   42355 provision.go:84] configureAuth start
	I0913 19:22:03.719122   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.719457   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:22:03.722043   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.722403   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.722434   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.722586   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.724810   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.725206   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.725237   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.725344   42355 provision.go:143] copyHostCerts
	I0913 19:22:03.725384   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:22:03.725414   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:22:03.725423   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:22:03.725490   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:22:03.725573   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:22:03.725596   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:22:03.725602   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:22:03.725626   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:22:03.725680   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:22:03.725696   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:22:03.725701   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:22:03.725721   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:22:03.725807   42355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.multinode-832180 san=[127.0.0.1 192.168.39.107 localhost minikube multinode-832180]
	I0913 19:22:03.971079   42355 provision.go:177] copyRemoteCerts
	I0913 19:22:03.971140   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:22:03.971165   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.973539   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.973883   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.973912   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.974145   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.974336   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.974491   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.974607   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:22:04.057235   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 19:22:04.057315   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:22:04.083020   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 19:22:04.083093   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0913 19:22:04.107189   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 19:22:04.107267   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:22:04.137338   42355 provision.go:87] duration metric: took 418.215423ms to configureAuth
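	The server certificate copied to /etc/docker/server.pem just above was generated with the SANs listed in the log (127.0.0.1, 192.168.39.107, localhost, minikube, multinode-832180). A quick manual check of those SANs on the node, using standard openssl rather than anything the test runs, would be:
	
		sudo openssl x509 -in /etc/docker/server.pem -noout -text \
		  | grep -A1 'Subject Alternative Name'    # should show the IPs and DNS names listed above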
	I0913 19:22:04.137361   42355 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:22:04.137587   42355 config.go:182] Loaded profile config "multinode-832180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:22:04.137666   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:04.141005   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:04.141415   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:04.141461   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:04.141621   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:04.141809   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:04.141979   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:04.142117   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:04.142276   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:04.142444   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:04.142459   42355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:23:34.902586   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:23:34.902620   42355 machine.go:96] duration metric: took 1m31.541029453s to provisionDockerMachine
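	The 1m31s reported for provisionDockerMachine is almost entirely the gap between 19:22:04 and 19:23:34 above, i.e. the SSH command that writes the CRI-O drop-in and restarts the service. As a standalone sketch of that step (values copied from the log):
	
		sudo mkdir -p /etc/sysconfig
		printf %s "
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		" | sudo tee /etc/sysconfig/crio.minikube
		sudo systemctl restart crio    # the service restart falls inside this ~90s window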
	I0913 19:23:34.902634   42355 start.go:293] postStartSetup for "multinode-832180" (driver="kvm2")
	I0913 19:23:34.902648   42355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:23:34.902674   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:34.903008   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:23:34.903039   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:34.906264   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:34.906739   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:34.906764   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:34.906973   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:34.907161   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:34.907290   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:34.907393   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:34.995617   42355 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:23:35.000099   42355 command_runner.go:130] > NAME=Buildroot
	I0913 19:23:35.000122   42355 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0913 19:23:35.000126   42355 command_runner.go:130] > ID=buildroot
	I0913 19:23:35.000137   42355 command_runner.go:130] > VERSION_ID=2023.02.9
	I0913 19:23:35.000154   42355 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0913 19:23:35.000261   42355 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:23:35.000285   42355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:23:35.000351   42355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:23:35.000445   42355 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:23:35.000456   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:23:35.000595   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:23:35.011312   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:23:35.035931   42355 start.go:296] duration metric: took 133.28504ms for postStartSetup
	I0913 19:23:35.035969   42355 fix.go:56] duration metric: took 1m31.696271499s for fixHost
	I0913 19:23:35.035988   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.038594   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.039022   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.039047   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.039202   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.039384   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.039548   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.039663   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.039794   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:23:35.039970   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:23:35.039983   42355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:23:35.147041   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726255415.124046029
	
	I0913 19:23:35.147065   42355 fix.go:216] guest clock: 1726255415.124046029
	I0913 19:23:35.147072   42355 fix.go:229] Guest: 2024-09-13 19:23:35.124046029 +0000 UTC Remote: 2024-09-13 19:23:35.035973119 +0000 UTC m=+91.823272639 (delta=88.07291ms)
	I0913 19:23:35.147112   42355 fix.go:200] guest clock delta is within tolerance: 88.07291ms
	I0913 19:23:35.147116   42355 start.go:83] releasing machines lock for "multinode-832180", held for 1m31.807435737s
	I0913 19:23:35.147137   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.147366   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:23:35.150000   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.150334   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.150364   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.150460   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151116   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151288   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151359   42355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:23:35.151398   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.151529   42355 ssh_runner.go:195] Run: cat /version.json
	I0913 19:23:35.151552   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.153925   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154308   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154342   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.154369   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154486   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.154631   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.154717   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.154741   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154780   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.154895   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.154954   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:35.155038   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.155163   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.155296   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:35.231487   42355 command_runner.go:130] > {"iso_version": "v1.34.0-1726156389-19616", "kicbase_version": "v0.0.45-1725963390-19606", "minikube_version": "v1.34.0", "commit": "5022c44a3509464df545efc115fbb6c3f1b5e972"}
	I0913 19:23:35.231745   42355 ssh_runner.go:195] Run: systemctl --version
	I0913 19:23:35.258963   42355 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0913 19:23:35.259010   42355 command_runner.go:130] > systemd 252 (252)
	I0913 19:23:35.259035   42355 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0913 19:23:35.259103   42355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:23:35.429729   42355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 19:23:35.449928   42355 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0913 19:23:35.449996   42355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:23:35.450043   42355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:23:35.467409   42355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:23:35.467439   42355 start.go:495] detecting cgroup driver to use...
	I0913 19:23:35.467515   42355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:23:35.491182   42355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:23:35.514420   42355 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:23:35.514496   42355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:23:35.528909   42355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:23:35.547571   42355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:23:35.697656   42355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:23:35.840192   42355 docker.go:233] disabling docker service ...
	I0913 19:23:35.840273   42355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:23:35.858823   42355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:23:35.874448   42355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:23:36.020272   42355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:23:36.168786   42355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:23:36.185474   42355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:23:36.206574   42355 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0913 19:23:36.206616   42355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:23:36.206670   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.219258   42355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:23:36.219336   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.231233   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.242940   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.254701   42355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:23:36.266817   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.278064   42355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.289357   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
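	The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (illustrative only; section headers and any other keys in the real drop-in are omitted), the resulting settings are:
	
		# effect of the edits logged at 19:23:36 (reconstruction, not a dump of the file)
		pause_image = "registry.k8s.io/pause:3.10"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]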
	I0913 19:23:36.300370   42355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:23:36.310295   42355 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0913 19:23:36.310365   42355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:23:36.321313   42355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:23:36.476708   42355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:23:37.656296   42355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.179547879s)
	I0913 19:23:37.656324   42355 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:23:37.656383   42355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:23:37.662472   42355 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0913 19:23:37.662500   42355 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0913 19:23:37.662509   42355 command_runner.go:130] > Device: 0,22	Inode: 1376        Links: 1
	I0913 19:23:37.662516   42355 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0913 19:23:37.662523   42355 command_runner.go:130] > Access: 2024-09-13 19:23:37.610496236 +0000
	I0913 19:23:37.662529   42355 command_runner.go:130] > Modify: 2024-09-13 19:23:37.492488935 +0000
	I0913 19:23:37.662534   42355 command_runner.go:130] > Change: 2024-09-13 19:23:37.492488935 +0000
	I0913 19:23:37.662539   42355 command_runner.go:130] >  Birth: -
	I0913 19:23:37.662572   42355 start.go:563] Will wait 60s for crictl version
	I0913 19:23:37.662624   42355 ssh_runner.go:195] Run: which crictl
	I0913 19:23:37.666993   42355 command_runner.go:130] > /usr/bin/crictl
	I0913 19:23:37.667178   42355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:23:37.706486   42355 command_runner.go:130] > Version:  0.1.0
	I0913 19:23:37.706515   42355 command_runner.go:130] > RuntimeName:  cri-o
	I0913 19:23:37.706520   42355 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0913 19:23:37.706526   42355 command_runner.go:130] > RuntimeApiVersion:  v1
	I0913 19:23:37.706545   42355 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:23:37.706611   42355 ssh_runner.go:195] Run: crio --version
	I0913 19:23:37.735900   42355 command_runner.go:130] > crio version 1.29.1
	I0913 19:23:37.735929   42355 command_runner.go:130] > Version:        1.29.1
	I0913 19:23:37.735937   42355 command_runner.go:130] > GitCommit:      unknown
	I0913 19:23:37.735942   42355 command_runner.go:130] > GitCommitDate:  unknown
	I0913 19:23:37.735946   42355 command_runner.go:130] > GitTreeState:   clean
	I0913 19:23:37.735951   42355 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0913 19:23:37.735955   42355 command_runner.go:130] > GoVersion:      go1.21.6
	I0913 19:23:37.735959   42355 command_runner.go:130] > Compiler:       gc
	I0913 19:23:37.735963   42355 command_runner.go:130] > Platform:       linux/amd64
	I0913 19:23:37.735967   42355 command_runner.go:130] > Linkmode:       dynamic
	I0913 19:23:37.735972   42355 command_runner.go:130] > BuildTags:      
	I0913 19:23:37.735976   42355 command_runner.go:130] >   containers_image_ostree_stub
	I0913 19:23:37.735980   42355 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0913 19:23:37.735984   42355 command_runner.go:130] >   btrfs_noversion
	I0913 19:23:37.735988   42355 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0913 19:23:37.735992   42355 command_runner.go:130] >   libdm_no_deferred_remove
	I0913 19:23:37.735998   42355 command_runner.go:130] >   seccomp
	I0913 19:23:37.736003   42355 command_runner.go:130] > LDFlags:          unknown
	I0913 19:23:37.736007   42355 command_runner.go:130] > SeccompEnabled:   true
	I0913 19:23:37.736013   42355 command_runner.go:130] > AppArmorEnabled:  false
	I0913 19:23:37.736112   42355 ssh_runner.go:195] Run: crio --version
	I0913 19:23:37.763708   42355 command_runner.go:130] > crio version 1.29.1
	I0913 19:23:37.763730   42355 command_runner.go:130] > Version:        1.29.1
	I0913 19:23:37.763736   42355 command_runner.go:130] > GitCommit:      unknown
	I0913 19:23:37.763741   42355 command_runner.go:130] > GitCommitDate:  unknown
	I0913 19:23:37.763745   42355 command_runner.go:130] > GitTreeState:   clean
	I0913 19:23:37.763750   42355 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0913 19:23:37.763754   42355 command_runner.go:130] > GoVersion:      go1.21.6
	I0913 19:23:37.763757   42355 command_runner.go:130] > Compiler:       gc
	I0913 19:23:37.763763   42355 command_runner.go:130] > Platform:       linux/amd64
	I0913 19:23:37.763768   42355 command_runner.go:130] > Linkmode:       dynamic
	I0913 19:23:37.763786   42355 command_runner.go:130] > BuildTags:      
	I0913 19:23:37.763793   42355 command_runner.go:130] >   containers_image_ostree_stub
	I0913 19:23:37.763803   42355 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0913 19:23:37.763808   42355 command_runner.go:130] >   btrfs_noversion
	I0913 19:23:37.763823   42355 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0913 19:23:37.763830   42355 command_runner.go:130] >   libdm_no_deferred_remove
	I0913 19:23:37.763834   42355 command_runner.go:130] >   seccomp
	I0913 19:23:37.763841   42355 command_runner.go:130] > LDFlags:          unknown
	I0913 19:23:37.763845   42355 command_runner.go:130] > SeccompEnabled:   true
	I0913 19:23:37.763851   42355 command_runner.go:130] > AppArmorEnabled:  false
	I0913 19:23:37.767044   42355 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:23:37.768482   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:23:37.771107   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:37.771453   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:37.771475   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:37.771740   42355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:23:37.776352   42355 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0913 19:23:37.776451   42355 kubeadm.go:883] updating cluster {Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:23:37.776568   42355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:23:37.776608   42355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:23:37.816243   42355 command_runner.go:130] > {
	I0913 19:23:37.816268   42355 command_runner.go:130] >   "images": [
	I0913 19:23:37.816273   42355 command_runner.go:130] >     {
	I0913 19:23:37.816281   42355 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0913 19:23:37.816285   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816291   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0913 19:23:37.816295   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816299   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816307   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0913 19:23:37.816315   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0913 19:23:37.816321   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816327   42355 command_runner.go:130] >       "size": "87190579",
	I0913 19:23:37.816333   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816338   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816348   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816357   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816363   42355 command_runner.go:130] >     },
	I0913 19:23:37.816371   42355 command_runner.go:130] >     {
	I0913 19:23:37.816379   42355 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0913 19:23:37.816385   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816391   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0913 19:23:37.816399   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816405   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816415   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0913 19:23:37.816426   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0913 19:23:37.816431   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816440   42355 command_runner.go:130] >       "size": "1363676",
	I0913 19:23:37.816449   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816467   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816474   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816478   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816484   42355 command_runner.go:130] >     },
	I0913 19:23:37.816489   42355 command_runner.go:130] >     {
	I0913 19:23:37.816497   42355 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0913 19:23:37.816503   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816509   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0913 19:23:37.816514   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816520   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816536   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0913 19:23:37.816553   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0913 19:23:37.816562   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816567   42355 command_runner.go:130] >       "size": "31470524",
	I0913 19:23:37.816574   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816578   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816584   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816588   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816593   42355 command_runner.go:130] >     },
	I0913 19:23:37.816597   42355 command_runner.go:130] >     {
	I0913 19:23:37.816604   42355 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0913 19:23:37.816613   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816625   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0913 19:23:37.816634   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816643   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816657   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0913 19:23:37.816673   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0913 19:23:37.816680   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816685   42355 command_runner.go:130] >       "size": "63273227",
	I0913 19:23:37.816693   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816704   42355 command_runner.go:130] >       "username": "nonroot",
	I0913 19:23:37.816713   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816722   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816730   42355 command_runner.go:130] >     },
	I0913 19:23:37.816738   42355 command_runner.go:130] >     {
	I0913 19:23:37.816751   42355 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0913 19:23:37.816760   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816766   42355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0913 19:23:37.816772   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816778   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816792   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0913 19:23:37.816806   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0913 19:23:37.816815   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816824   42355 command_runner.go:130] >       "size": "149009664",
	I0913 19:23:37.816833   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.816843   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.816850   42355 command_runner.go:130] >       },
	I0913 19:23:37.816854   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816860   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816867   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816877   42355 command_runner.go:130] >     },
	I0913 19:23:37.816882   42355 command_runner.go:130] >     {
	I0913 19:23:37.816896   42355 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0913 19:23:37.816905   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816916   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0913 19:23:37.816929   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816936   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816945   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0913 19:23:37.816961   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0913 19:23:37.816970   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816983   42355 command_runner.go:130] >       "size": "95237600",
	I0913 19:23:37.816992   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817001   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817010   42355 command_runner.go:130] >       },
	I0913 19:23:37.817018   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817025   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817031   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817040   42355 command_runner.go:130] >     },
	I0913 19:23:37.817049   42355 command_runner.go:130] >     {
	I0913 19:23:37.817061   42355 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0913 19:23:37.817071   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817082   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0913 19:23:37.817090   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817099   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817109   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0913 19:23:37.817124   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0913 19:23:37.817134   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817146   42355 command_runner.go:130] >       "size": "89437508",
	I0913 19:23:37.817155   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817161   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817167   42355 command_runner.go:130] >       },
	I0913 19:23:37.817176   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817185   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817192   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817196   42355 command_runner.go:130] >     },
	I0913 19:23:37.817202   42355 command_runner.go:130] >     {
	I0913 19:23:37.817213   42355 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0913 19:23:37.817223   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817234   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0913 19:23:37.817241   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817248   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817269   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0913 19:23:37.817284   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0913 19:23:37.817294   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817305   42355 command_runner.go:130] >       "size": "92733849",
	I0913 19:23:37.817314   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.817322   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817331   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817338   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817343   42355 command_runner.go:130] >     },
	I0913 19:23:37.817348   42355 command_runner.go:130] >     {
	I0913 19:23:37.817355   42355 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0913 19:23:37.817359   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817365   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0913 19:23:37.817370   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817376   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817388   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0913 19:23:37.817399   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0913 19:23:37.817405   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817411   42355 command_runner.go:130] >       "size": "68420934",
	I0913 19:23:37.817417   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817423   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817428   42355 command_runner.go:130] >       },
	I0913 19:23:37.817434   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817439   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817443   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817446   42355 command_runner.go:130] >     },
	I0913 19:23:37.817451   42355 command_runner.go:130] >     {
	I0913 19:23:37.817460   42355 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0913 19:23:37.817471   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817478   42355 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0913 19:23:37.817486   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817493   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817507   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0913 19:23:37.817520   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0913 19:23:37.817527   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817532   42355 command_runner.go:130] >       "size": "742080",
	I0913 19:23:37.817540   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817548   42355 command_runner.go:130] >         "value": "65535"
	I0913 19:23:37.817556   42355 command_runner.go:130] >       },
	I0913 19:23:37.817563   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817571   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817578   42355 command_runner.go:130] >       "pinned": true
	I0913 19:23:37.817585   42355 command_runner.go:130] >     }
	I0913 19:23:37.817591   42355 command_runner.go:130] >   ]
	I0913 19:23:37.817597   42355 command_runner.go:130] > }
	I0913 19:23:37.817809   42355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:23:37.817825   42355 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:23:37.817880   42355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:23:37.850471   42355 command_runner.go:130] > {
	I0913 19:23:37.850495   42355 command_runner.go:130] >   "images": [
	I0913 19:23:37.850501   42355 command_runner.go:130] >     {
	I0913 19:23:37.850514   42355 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0913 19:23:37.850521   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850530   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0913 19:23:37.850533   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850538   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850546   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0913 19:23:37.850552   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0913 19:23:37.850556   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850561   42355 command_runner.go:130] >       "size": "87190579",
	I0913 19:23:37.850564   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850569   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850576   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850586   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850594   42355 command_runner.go:130] >     },
	I0913 19:23:37.850600   42355 command_runner.go:130] >     {
	I0913 19:23:37.850611   42355 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0913 19:23:37.850620   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850627   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0913 19:23:37.850633   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850637   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850646   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0913 19:23:37.850655   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0913 19:23:37.850659   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850664   42355 command_runner.go:130] >       "size": "1363676",
	I0913 19:23:37.850670   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850683   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850692   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850699   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850706   42355 command_runner.go:130] >     },
	I0913 19:23:37.850711   42355 command_runner.go:130] >     {
	I0913 19:23:37.850724   42355 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0913 19:23:37.850731   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850737   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0913 19:23:37.850743   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850749   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850764   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0913 19:23:37.850777   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0913 19:23:37.850785   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850792   42355 command_runner.go:130] >       "size": "31470524",
	I0913 19:23:37.850803   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850810   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850819   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850824   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850830   42355 command_runner.go:130] >     },
	I0913 19:23:37.850834   42355 command_runner.go:130] >     {
	I0913 19:23:37.850846   42355 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0913 19:23:37.850856   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850864   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0913 19:23:37.850872   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850879   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850893   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0913 19:23:37.850911   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0913 19:23:37.850917   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850923   42355 command_runner.go:130] >       "size": "63273227",
	I0913 19:23:37.850932   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850942   42355 command_runner.go:130] >       "username": "nonroot",
	I0913 19:23:37.850960   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850969   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850974   42355 command_runner.go:130] >     },
	I0913 19:23:37.850981   42355 command_runner.go:130] >     {
	I0913 19:23:37.850990   42355 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0913 19:23:37.850998   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851002   42355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0913 19:23:37.851008   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851015   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851029   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0913 19:23:37.851043   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0913 19:23:37.851051   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851059   42355 command_runner.go:130] >       "size": "149009664",
	I0913 19:23:37.851068   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851074   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851081   42355 command_runner.go:130] >       },
	I0913 19:23:37.851085   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851090   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851097   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851109   42355 command_runner.go:130] >     },
	I0913 19:23:37.851115   42355 command_runner.go:130] >     {
	I0913 19:23:37.851128   42355 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0913 19:23:37.851136   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851150   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0913 19:23:37.851160   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851166   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851178   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0913 19:23:37.851193   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0913 19:23:37.851203   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851211   42355 command_runner.go:130] >       "size": "95237600",
	I0913 19:23:37.851220   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851226   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851235   42355 command_runner.go:130] >       },
	I0913 19:23:37.851241   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851249   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851253   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851256   42355 command_runner.go:130] >     },
	I0913 19:23:37.851262   42355 command_runner.go:130] >     {
	I0913 19:23:37.851275   42355 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0913 19:23:37.851284   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851293   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0913 19:23:37.851302   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851309   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851323   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0913 19:23:37.851336   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0913 19:23:37.851346   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851352   42355 command_runner.go:130] >       "size": "89437508",
	I0913 19:23:37.851362   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851369   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851376   42355 command_runner.go:130] >       },
	I0913 19:23:37.851382   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851390   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851396   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851404   42355 command_runner.go:130] >     },
	I0913 19:23:37.851410   42355 command_runner.go:130] >     {
	I0913 19:23:37.851423   42355 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0913 19:23:37.851432   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851441   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0913 19:23:37.851450   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851457   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851478   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0913 19:23:37.851493   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0913 19:23:37.851501   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851507   42355 command_runner.go:130] >       "size": "92733849",
	I0913 19:23:37.851516   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.851522   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851530   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851536   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851543   42355 command_runner.go:130] >     },
	I0913 19:23:37.851549   42355 command_runner.go:130] >     {
	I0913 19:23:37.851559   42355 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0913 19:23:37.851568   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851576   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0913 19:23:37.851584   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851591   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851606   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0913 19:23:37.851619   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0913 19:23:37.851628   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851635   42355 command_runner.go:130] >       "size": "68420934",
	I0913 19:23:37.851644   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851648   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851652   42355 command_runner.go:130] >       },
	I0913 19:23:37.851656   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851660   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851664   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851667   42355 command_runner.go:130] >     },
	I0913 19:23:37.851671   42355 command_runner.go:130] >     {
	I0913 19:23:37.851677   42355 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0913 19:23:37.851683   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851688   42355 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0913 19:23:37.851694   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851698   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851708   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0913 19:23:37.851719   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0913 19:23:37.851726   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851730   42355 command_runner.go:130] >       "size": "742080",
	I0913 19:23:37.851734   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851738   42355 command_runner.go:130] >         "value": "65535"
	I0913 19:23:37.851744   42355 command_runner.go:130] >       },
	I0913 19:23:37.851748   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851751   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851758   42355 command_runner.go:130] >       "pinned": true
	I0913 19:23:37.851761   42355 command_runner.go:130] >     }
	I0913 19:23:37.851764   42355 command_runner.go:130] >   ]
	I0913 19:23:37.851769   42355 command_runner.go:130] > }
	I0913 19:23:37.851874   42355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:23:37.851884   42355 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:23:37.851891   42355 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.1 crio true true} ...
	I0913 19:23:37.851977   42355 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-832180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:23:37.852038   42355 ssh_runner.go:195] Run: crio config
	I0913 19:23:37.885707   42355 command_runner.go:130] ! time="2024-09-13 19:23:37.862901052Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0913 19:23:37.892051   42355 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0913 19:23:37.897600   42355 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0913 19:23:37.897632   42355 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0913 19:23:37.897643   42355 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0913 19:23:37.897648   42355 command_runner.go:130] > #
	I0913 19:23:37.897658   42355 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0913 19:23:37.897668   42355 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0913 19:23:37.897677   42355 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0913 19:23:37.897684   42355 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0913 19:23:37.897688   42355 command_runner.go:130] > # reload'.
	I0913 19:23:37.897693   42355 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0913 19:23:37.897703   42355 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0913 19:23:37.897709   42355 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0913 19:23:37.897715   42355 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0913 19:23:37.897723   42355 command_runner.go:130] > [crio]
	I0913 19:23:37.897731   42355 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0913 19:23:37.897741   42355 command_runner.go:130] > # containers images, in this directory.
	I0913 19:23:37.897748   42355 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0913 19:23:37.897764   42355 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0913 19:23:37.897775   42355 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0913 19:23:37.897786   42355 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0913 19:23:37.897795   42355 command_runner.go:130] > # imagestore = ""
	I0913 19:23:37.897806   42355 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0913 19:23:37.897817   42355 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0913 19:23:37.897826   42355 command_runner.go:130] > storage_driver = "overlay"
	I0913 19:23:37.897834   42355 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0913 19:23:37.897847   42355 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0913 19:23:37.897864   42355 command_runner.go:130] > storage_option = [
	I0913 19:23:37.897874   42355 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0913 19:23:37.897879   42355 command_runner.go:130] > ]
	I0913 19:23:37.897889   42355 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0913 19:23:37.897895   42355 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0913 19:23:37.897901   42355 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0913 19:23:37.897907   42355 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0913 19:23:37.897914   42355 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0913 19:23:37.897919   42355 command_runner.go:130] > # always happen on a node reboot
	I0913 19:23:37.897926   42355 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0913 19:23:37.897935   42355 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0913 19:23:37.897943   42355 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0913 19:23:37.897948   42355 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0913 19:23:37.897954   42355 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0913 19:23:37.897961   42355 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0913 19:23:37.897971   42355 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0913 19:23:37.897975   42355 command_runner.go:130] > # internal_wipe = true
	I0913 19:23:37.897982   42355 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0913 19:23:37.897989   42355 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0913 19:23:37.897993   42355 command_runner.go:130] > # internal_repair = false
	I0913 19:23:37.897999   42355 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0913 19:23:37.898005   42355 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0913 19:23:37.898010   42355 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0913 19:23:37.898015   42355 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0913 19:23:37.898023   42355 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0913 19:23:37.898029   42355 command_runner.go:130] > [crio.api]
	I0913 19:23:37.898034   42355 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0913 19:23:37.898039   42355 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0913 19:23:37.898046   42355 command_runner.go:130] > # IP address on which the stream server will listen.
	I0913 19:23:37.898050   42355 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0913 19:23:37.898058   42355 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0913 19:23:37.898063   42355 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0913 19:23:37.898068   42355 command_runner.go:130] > # stream_port = "0"
	I0913 19:23:37.898076   42355 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0913 19:23:37.898082   42355 command_runner.go:130] > # stream_enable_tls = false
	I0913 19:23:37.898088   42355 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0913 19:23:37.898102   42355 command_runner.go:130] > # stream_idle_timeout = ""
	I0913 19:23:37.898112   42355 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0913 19:23:37.898122   42355 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0913 19:23:37.898126   42355 command_runner.go:130] > # minutes.
	I0913 19:23:37.898130   42355 command_runner.go:130] > # stream_tls_cert = ""
	I0913 19:23:37.898135   42355 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0913 19:23:37.898141   42355 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0913 19:23:37.898148   42355 command_runner.go:130] > # stream_tls_key = ""
	I0913 19:23:37.898153   42355 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0913 19:23:37.898159   42355 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0913 19:23:37.898174   42355 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0913 19:23:37.898182   42355 command_runner.go:130] > # stream_tls_ca = ""
	I0913 19:23:37.898189   42355 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0913 19:23:37.898196   42355 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0913 19:23:37.898203   42355 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0913 19:23:37.898209   42355 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0913 19:23:37.898215   42355 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0913 19:23:37.898222   42355 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0913 19:23:37.898226   42355 command_runner.go:130] > [crio.runtime]
	I0913 19:23:37.898234   42355 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0913 19:23:37.898239   42355 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0913 19:23:37.898246   42355 command_runner.go:130] > # "nofile=1024:2048"
	I0913 19:23:37.898251   42355 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0913 19:23:37.898257   42355 command_runner.go:130] > # default_ulimits = [
	I0913 19:23:37.898260   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898266   42355 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0913 19:23:37.898270   42355 command_runner.go:130] > # no_pivot = false
	I0913 19:23:37.898280   42355 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0913 19:23:37.898288   42355 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0913 19:23:37.898293   42355 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0913 19:23:37.898301   42355 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0913 19:23:37.898308   42355 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0913 19:23:37.898314   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0913 19:23:37.898321   42355 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0913 19:23:37.898325   42355 command_runner.go:130] > # Cgroup setting for conmon
	I0913 19:23:37.898333   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0913 19:23:37.898337   42355 command_runner.go:130] > conmon_cgroup = "pod"
	I0913 19:23:37.898343   42355 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0913 19:23:37.898348   42355 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0913 19:23:37.898356   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0913 19:23:37.898360   42355 command_runner.go:130] > conmon_env = [
	I0913 19:23:37.898365   42355 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0913 19:23:37.898370   42355 command_runner.go:130] > ]
	I0913 19:23:37.898375   42355 command_runner.go:130] > # Additional environment variables to set for all the
	I0913 19:23:37.898383   42355 command_runner.go:130] > # containers. These are overridden if set in the
	I0913 19:23:37.898390   42355 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0913 19:23:37.898394   42355 command_runner.go:130] > # default_env = [
	I0913 19:23:37.898401   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898407   42355 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0913 19:23:37.898415   42355 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0913 19:23:37.898419   42355 command_runner.go:130] > # selinux = false
	I0913 19:23:37.898430   42355 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0913 19:23:37.898438   42355 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0913 19:23:37.898446   42355 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0913 19:23:37.898450   42355 command_runner.go:130] > # seccomp_profile = ""
	I0913 19:23:37.898458   42355 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0913 19:23:37.898463   42355 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0913 19:23:37.898471   42355 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0913 19:23:37.898475   42355 command_runner.go:130] > # which might increase security.
	I0913 19:23:37.898482   42355 command_runner.go:130] > # This option is currently deprecated,
	I0913 19:23:37.898488   42355 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0913 19:23:37.898495   42355 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0913 19:23:37.898501   42355 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0913 19:23:37.898513   42355 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0913 19:23:37.898521   42355 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0913 19:23:37.898529   42355 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0913 19:23:37.898534   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.898541   42355 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0913 19:23:37.898546   42355 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0913 19:23:37.898552   42355 command_runner.go:130] > # the cgroup blockio controller.
	I0913 19:23:37.898556   42355 command_runner.go:130] > # blockio_config_file = ""
	I0913 19:23:37.898562   42355 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0913 19:23:37.898568   42355 command_runner.go:130] > # blockio parameters.
	I0913 19:23:37.898572   42355 command_runner.go:130] > # blockio_reload = false
	I0913 19:23:37.898578   42355 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0913 19:23:37.898584   42355 command_runner.go:130] > # irqbalance daemon.
	I0913 19:23:37.898589   42355 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0913 19:23:37.898597   42355 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0913 19:23:37.898603   42355 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0913 19:23:37.898612   42355 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0913 19:23:37.898618   42355 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0913 19:23:37.898626   42355 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0913 19:23:37.898631   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.898637   42355 command_runner.go:130] > # rdt_config_file = ""
	I0913 19:23:37.898642   42355 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0913 19:23:37.898649   42355 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0913 19:23:37.898664   42355 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0913 19:23:37.898670   42355 command_runner.go:130] > # separate_pull_cgroup = ""
	I0913 19:23:37.898676   42355 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0913 19:23:37.898684   42355 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0913 19:23:37.898688   42355 command_runner.go:130] > # will be added.
	I0913 19:23:37.898692   42355 command_runner.go:130] > # default_capabilities = [
	I0913 19:23:37.898698   42355 command_runner.go:130] > # 	"CHOWN",
	I0913 19:23:37.898701   42355 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0913 19:23:37.898707   42355 command_runner.go:130] > # 	"FSETID",
	I0913 19:23:37.898710   42355 command_runner.go:130] > # 	"FOWNER",
	I0913 19:23:37.898714   42355 command_runner.go:130] > # 	"SETGID",
	I0913 19:23:37.898718   42355 command_runner.go:130] > # 	"SETUID",
	I0913 19:23:37.898722   42355 command_runner.go:130] > # 	"SETPCAP",
	I0913 19:23:37.898728   42355 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0913 19:23:37.898734   42355 command_runner.go:130] > # 	"KILL",
	I0913 19:23:37.898742   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898753   42355 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0913 19:23:37.898766   42355 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0913 19:23:37.898780   42355 command_runner.go:130] > # add_inheritable_capabilities = false
	I0913 19:23:37.898792   42355 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0913 19:23:37.898804   42355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0913 19:23:37.898812   42355 command_runner.go:130] > default_sysctls = [
	I0913 19:23:37.898819   42355 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0913 19:23:37.898826   42355 command_runner.go:130] > ]
	I0913 19:23:37.898830   42355 command_runner.go:130] > # List of devices on the host that a
	I0913 19:23:37.898839   42355 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0913 19:23:37.898843   42355 command_runner.go:130] > # allowed_devices = [
	I0913 19:23:37.898848   42355 command_runner.go:130] > # 	"/dev/fuse",
	I0913 19:23:37.898852   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898858   42355 command_runner.go:130] > # List of additional devices. specified as
	I0913 19:23:37.898866   42355 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0913 19:23:37.898873   42355 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0913 19:23:37.898878   42355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0913 19:23:37.898884   42355 command_runner.go:130] > # additional_devices = [
	I0913 19:23:37.898887   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898892   42355 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0913 19:23:37.898896   42355 command_runner.go:130] > # cdi_spec_dirs = [
	I0913 19:23:37.898899   42355 command_runner.go:130] > # 	"/etc/cdi",
	I0913 19:23:37.898903   42355 command_runner.go:130] > # 	"/var/run/cdi",
	I0913 19:23:37.898906   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898912   42355 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0913 19:23:37.898918   42355 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0913 19:23:37.898921   42355 command_runner.go:130] > # Defaults to false.
	I0913 19:23:37.898927   42355 command_runner.go:130] > # device_ownership_from_security_context = false
	I0913 19:23:37.898933   42355 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0913 19:23:37.898939   42355 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0913 19:23:37.898943   42355 command_runner.go:130] > # hooks_dir = [
	I0913 19:23:37.898947   42355 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0913 19:23:37.898950   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898956   42355 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0913 19:23:37.898967   42355 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0913 19:23:37.898972   42355 command_runner.go:130] > # its default mounts from the following two files:
	I0913 19:23:37.898976   42355 command_runner.go:130] > #
	I0913 19:23:37.898981   42355 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0913 19:23:37.898989   42355 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0913 19:23:37.898994   42355 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0913 19:23:37.898999   42355 command_runner.go:130] > #
	I0913 19:23:37.899005   42355 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0913 19:23:37.899012   42355 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0913 19:23:37.899020   42355 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0913 19:23:37.899027   42355 command_runner.go:130] > #      only add mounts it finds in this file.
	I0913 19:23:37.899031   42355 command_runner.go:130] > #
	I0913 19:23:37.899035   42355 command_runner.go:130] > # default_mounts_file = ""
	I0913 19:23:37.899042   42355 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0913 19:23:37.899048   42355 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0913 19:23:37.899054   42355 command_runner.go:130] > pids_limit = 1024
	I0913 19:23:37.899060   42355 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0913 19:23:37.899068   42355 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0913 19:23:37.899074   42355 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0913 19:23:37.899084   42355 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0913 19:23:37.899089   42355 command_runner.go:130] > # log_size_max = -1
	I0913 19:23:37.899098   42355 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0913 19:23:37.899102   42355 command_runner.go:130] > # log_to_journald = false
	I0913 19:23:37.899111   42355 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0913 19:23:37.899116   42355 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0913 19:23:37.899121   42355 command_runner.go:130] > # Path to directory for container attach sockets.
	I0913 19:23:37.899128   42355 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0913 19:23:37.899133   42355 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0913 19:23:37.899138   42355 command_runner.go:130] > # bind_mount_prefix = ""
	I0913 19:23:37.899143   42355 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0913 19:23:37.899149   42355 command_runner.go:130] > # read_only = false
	I0913 19:23:37.899155   42355 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0913 19:23:37.899163   42355 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0913 19:23:37.899167   42355 command_runner.go:130] > # live configuration reload.
	I0913 19:23:37.899171   42355 command_runner.go:130] > # log_level = "info"
	I0913 19:23:37.899177   42355 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0913 19:23:37.899185   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.899189   42355 command_runner.go:130] > # log_filter = ""
	I0913 19:23:37.899194   42355 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0913 19:23:37.899202   42355 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0913 19:23:37.899205   42355 command_runner.go:130] > # separated by comma.
	I0913 19:23:37.899212   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899215   42355 command_runner.go:130] > # uid_mappings = ""
	I0913 19:23:37.899221   42355 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0913 19:23:37.899227   42355 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0913 19:23:37.899231   42355 command_runner.go:130] > # separated by comma.
	I0913 19:23:37.899238   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899247   42355 command_runner.go:130] > # gid_mappings = ""
	I0913 19:23:37.899252   42355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0913 19:23:37.899259   42355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0913 19:23:37.899266   42355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0913 19:23:37.899273   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899279   42355 command_runner.go:130] > # minimum_mappable_uid = -1
	I0913 19:23:37.899285   42355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0913 19:23:37.899294   42355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0913 19:23:37.899300   42355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0913 19:23:37.899309   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899313   42355 command_runner.go:130] > # minimum_mappable_gid = -1
	I0913 19:23:37.899319   42355 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0913 19:23:37.899328   42355 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0913 19:23:37.899333   42355 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0913 19:23:37.899339   42355 command_runner.go:130] > # ctr_stop_timeout = 30
	I0913 19:23:37.899345   42355 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0913 19:23:37.899351   42355 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0913 19:23:37.899358   42355 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0913 19:23:37.899362   42355 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0913 19:23:37.899367   42355 command_runner.go:130] > drop_infra_ctr = false
	I0913 19:23:37.899373   42355 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0913 19:23:37.899381   42355 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0913 19:23:37.899388   42355 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0913 19:23:37.899394   42355 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0913 19:23:37.899401   42355 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0913 19:23:37.899408   42355 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0913 19:23:37.899414   42355 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0913 19:23:37.899421   42355 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0913 19:23:37.899425   42355 command_runner.go:130] > # shared_cpuset = ""
	I0913 19:23:37.899436   42355 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0913 19:23:37.899441   42355 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0913 19:23:37.899448   42355 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0913 19:23:37.899454   42355 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0913 19:23:37.899461   42355 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0913 19:23:37.899466   42355 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0913 19:23:37.899476   42355 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0913 19:23:37.899482   42355 command_runner.go:130] > # enable_criu_support = false
	I0913 19:23:37.899487   42355 command_runner.go:130] > # Enable/disable the generation of the container,
	I0913 19:23:37.899495   42355 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0913 19:23:37.899500   42355 command_runner.go:130] > # enable_pod_events = false
	I0913 19:23:37.899508   42355 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0913 19:23:37.899521   42355 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0913 19:23:37.899525   42355 command_runner.go:130] > # default_runtime = "runc"
	I0913 19:23:37.899532   42355 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0913 19:23:37.899540   42355 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0913 19:23:37.899551   42355 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0913 19:23:37.899556   42355 command_runner.go:130] > # creation as a file is not desired either.
	I0913 19:23:37.899566   42355 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0913 19:23:37.899571   42355 command_runner.go:130] > # the hostname is being managed dynamically.
	I0913 19:23:37.899578   42355 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0913 19:23:37.899581   42355 command_runner.go:130] > # ]
	I0913 19:23:37.899588   42355 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0913 19:23:37.899596   42355 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0913 19:23:37.899602   42355 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0913 19:23:37.899609   42355 command_runner.go:130] > # Each entry in the table should follow the format:
	I0913 19:23:37.899612   42355 command_runner.go:130] > #
	I0913 19:23:37.899617   42355 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0913 19:23:37.899621   42355 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0913 19:23:37.899668   42355 command_runner.go:130] > # runtime_type = "oci"
	I0913 19:23:37.899679   42355 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0913 19:23:37.899683   42355 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0913 19:23:37.899688   42355 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0913 19:23:37.899692   42355 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0913 19:23:37.899698   42355 command_runner.go:130] > # monitor_env = []
	I0913 19:23:37.899703   42355 command_runner.go:130] > # privileged_without_host_devices = false
	I0913 19:23:37.899707   42355 command_runner.go:130] > # allowed_annotations = []
	I0913 19:23:37.899715   42355 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0913 19:23:37.899718   42355 command_runner.go:130] > # Where:
	I0913 19:23:37.899725   42355 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0913 19:23:37.899738   42355 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0913 19:23:37.899750   42355 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0913 19:23:37.899762   42355 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0913 19:23:37.899775   42355 command_runner.go:130] > #   in $PATH.
	I0913 19:23:37.899787   42355 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0913 19:23:37.899795   42355 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0913 19:23:37.899807   42355 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0913 19:23:37.899814   42355 command_runner.go:130] > #   state.
	I0913 19:23:37.899825   42355 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0913 19:23:37.899833   42355 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0913 19:23:37.899840   42355 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0913 19:23:37.899847   42355 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0913 19:23:37.899853   42355 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0913 19:23:37.899861   42355 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0913 19:23:37.899867   42355 command_runner.go:130] > #   The currently recognized values are:
	I0913 19:23:37.899875   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0913 19:23:37.899881   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0913 19:23:37.899888   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0913 19:23:37.899894   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0913 19:23:37.899902   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0913 19:23:37.899910   42355 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0913 19:23:37.899916   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0913 19:23:37.899924   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0913 19:23:37.899930   42355 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0913 19:23:37.899938   42355 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0913 19:23:37.899943   42355 command_runner.go:130] > #   deprecated option "conmon".
	I0913 19:23:37.899950   42355 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0913 19:23:37.899957   42355 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0913 19:23:37.899964   42355 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0913 19:23:37.899970   42355 command_runner.go:130] > #   should be moved to the container's cgroup
	I0913 19:23:37.899976   42355 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0913 19:23:37.899984   42355 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0913 19:23:37.899993   42355 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0913 19:23:37.900001   42355 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0913 19:23:37.900004   42355 command_runner.go:130] > #
	I0913 19:23:37.900008   42355 command_runner.go:130] > # Using the seccomp notifier feature:
	I0913 19:23:37.900014   42355 command_runner.go:130] > #
	I0913 19:23:37.900021   42355 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0913 19:23:37.900027   42355 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0913 19:23:37.900033   42355 command_runner.go:130] > #
	I0913 19:23:37.900038   42355 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0913 19:23:37.900047   42355 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0913 19:23:37.900050   42355 command_runner.go:130] > #
	I0913 19:23:37.900056   42355 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0913 19:23:37.900062   42355 command_runner.go:130] > # feature.
	I0913 19:23:37.900065   42355 command_runner.go:130] > #
	I0913 19:23:37.900070   42355 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0913 19:23:37.900078   42355 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0913 19:23:37.900084   42355 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0913 19:23:37.900092   42355 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0913 19:23:37.900098   42355 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0913 19:23:37.900101   42355 command_runner.go:130] > #
	I0913 19:23:37.900107   42355 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0913 19:23:37.900113   42355 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0913 19:23:37.900116   42355 command_runner.go:130] > #
	I0913 19:23:37.900123   42355 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0913 19:23:37.900130   42355 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0913 19:23:37.900133   42355 command_runner.go:130] > #
	I0913 19:23:37.900138   42355 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0913 19:23:37.900146   42355 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0913 19:23:37.900150   42355 command_runner.go:130] > # limitation.
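	For reference, a minimal sketch of an extra runtime-handler entry that allows the seccomp notifier annotation discussed above; the handler name "runc-debug" and the paths are illustrative assumptions, not values taken from this run:

		[crio.runtime.runtimes.runc-debug]
		# Absolute path to the runtime executable on the host.
		runtime_path = "/usr/bin/runc"
		# "oci" is assumed when runtime_type is omitted.
		runtime_type = "oci"
		# Per-handler root directory for container state.
		runtime_root = "/run/runc-debug"
		# Permit the experimental seccomp notifier annotation for this handler only.
		allowed_annotations = [
			"io.kubernetes.cri-o.seccompNotifierAction",
		]

	A pod would select such a handler through a Kubernetes RuntimeClass whose handler field is "runc-debug", and, as noted above, its restartPolicy has to be "Never" for the notifier's stop action to take effect.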
	I0913 19:23:37.900157   42355 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0913 19:23:37.900164   42355 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0913 19:23:37.900168   42355 command_runner.go:130] > runtime_type = "oci"
	I0913 19:23:37.900174   42355 command_runner.go:130] > runtime_root = "/run/runc"
	I0913 19:23:37.900179   42355 command_runner.go:130] > runtime_config_path = ""
	I0913 19:23:37.900184   42355 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0913 19:23:37.900191   42355 command_runner.go:130] > monitor_cgroup = "pod"
	I0913 19:23:37.900195   42355 command_runner.go:130] > monitor_exec_cgroup = ""
	I0913 19:23:37.900200   42355 command_runner.go:130] > monitor_env = [
	I0913 19:23:37.900205   42355 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0913 19:23:37.900211   42355 command_runner.go:130] > ]
	I0913 19:23:37.900215   42355 command_runner.go:130] > privileged_without_host_devices = false
	I0913 19:23:37.900222   42355 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0913 19:23:37.900229   42355 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0913 19:23:37.900235   42355 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0913 19:23:37.900244   42355 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0913 19:23:37.900256   42355 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0913 19:23:37.900265   42355 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0913 19:23:37.900274   42355 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0913 19:23:37.900283   42355 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0913 19:23:37.900289   42355 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0913 19:23:37.900297   42355 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0913 19:23:37.900303   42355 command_runner.go:130] > # Example:
	I0913 19:23:37.900307   42355 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0913 19:23:37.900311   42355 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0913 19:23:37.900318   42355 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0913 19:23:37.900323   42355 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0913 19:23:37.900328   42355 command_runner.go:130] > # cpuset = 0
	I0913 19:23:37.900332   42355 command_runner.go:130] > # cpushares = "0-1"
	I0913 19:23:37.900335   42355 command_runner.go:130] > # Where:
	I0913 19:23:37.900343   42355 command_runner.go:130] > # The workload name is workload-type.
	I0913 19:23:37.900350   42355 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0913 19:23:37.900357   42355 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0913 19:23:37.900362   42355 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0913 19:23:37.900371   42355 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0913 19:23:37.900378   42355 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
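	As a concrete illustration of the workloads table just described (all values are assumptions; note that the commented example above appears to transpose cpuset and cpushares, since a CPU list such as "0-1" reads as a cpuset while shares are a single number):

		[crio.runtime.workloads.workload-type]
		# Pods opt in by carrying this annotation key (value is ignored).
		activation_annotation = "io.crio/workload"
		# Prefix used for per-container overrides.
		annotation_prefix = "io.crio.workload-type"
		[crio.runtime.workloads.workload-type.resources]
		# Default values applied to every opted-in container.
		cpuset = "0-1"
		cpushares = "512"

	A single container could then be overridden with an annotation of the form io.crio.workload-type.cpushares/<container-name>, following the $annotation_prefix.$resource/$ctrName rule above.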
	I0913 19:23:37.900383   42355 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0913 19:23:37.900390   42355 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0913 19:23:37.900396   42355 command_runner.go:130] > # Default value is set to true
	I0913 19:23:37.900400   42355 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0913 19:23:37.900408   42355 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0913 19:23:37.900413   42355 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0913 19:23:37.900419   42355 command_runner.go:130] > # Default value is set to 'false'
	I0913 19:23:37.900423   42355 command_runner.go:130] > # disable_hostport_mapping = false
	I0913 19:23:37.900436   42355 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0913 19:23:37.900440   42355 command_runner.go:130] > #
	I0913 19:23:37.900446   42355 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0913 19:23:37.900452   42355 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0913 19:23:37.900458   42355 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0913 19:23:37.900463   42355 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0913 19:23:37.900470   42355 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0913 19:23:37.900474   42355 command_runner.go:130] > [crio.image]
	I0913 19:23:37.900480   42355 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0913 19:23:37.900484   42355 command_runner.go:130] > # default_transport = "docker://"
	I0913 19:23:37.900489   42355 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0913 19:23:37.900495   42355 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0913 19:23:37.900498   42355 command_runner.go:130] > # global_auth_file = ""
	I0913 19:23:37.900503   42355 command_runner.go:130] > # The image used to instantiate infra containers.
	I0913 19:23:37.900508   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.900512   42355 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0913 19:23:37.900518   42355 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0913 19:23:37.900523   42355 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0913 19:23:37.900528   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.900532   42355 command_runner.go:130] > # pause_image_auth_file = ""
	I0913 19:23:37.900537   42355 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0913 19:23:37.900543   42355 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0913 19:23:37.900550   42355 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0913 19:23:37.900555   42355 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0913 19:23:37.900559   42355 command_runner.go:130] > # pause_command = "/pause"
	I0913 19:23:37.900564   42355 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0913 19:23:37.900570   42355 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0913 19:23:37.900575   42355 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0913 19:23:37.900582   42355 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0913 19:23:37.900587   42355 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0913 19:23:37.900592   42355 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0913 19:23:37.900596   42355 command_runner.go:130] > # pinned_images = [
	I0913 19:23:37.900599   42355 command_runner.go:130] > # ]
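	A hypothetical pinned_images list showing the three pattern styles described above (the image names are only for illustration):

		pinned_images = [
			"registry.k8s.io/pause:3.10",	# exact: must match the entire name
			"registry.k8s.io/kube-*",	# glob: wildcard only at the end
			"*coredns*",	# keyword: wildcards on both ends
		]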
	I0913 19:23:37.900604   42355 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0913 19:23:37.900610   42355 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0913 19:23:37.900616   42355 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0913 19:23:37.900624   42355 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0913 19:23:37.900629   42355 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0913 19:23:37.900633   42355 command_runner.go:130] > # signature_policy = ""
	I0913 19:23:37.900639   42355 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0913 19:23:37.900648   42355 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0913 19:23:37.900654   42355 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0913 19:23:37.900662   42355 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0913 19:23:37.900670   42355 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0913 19:23:37.900675   42355 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0913 19:23:37.900683   42355 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0913 19:23:37.900689   42355 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0913 19:23:37.900695   42355 command_runner.go:130] > # changing them here.
	I0913 19:23:37.900699   42355 command_runner.go:130] > # insecure_registries = [
	I0913 19:23:37.900702   42355 command_runner.go:130] > # ]
	I0913 19:23:37.900708   42355 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0913 19:23:37.900715   42355 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0913 19:23:37.900719   42355 command_runner.go:130] > # image_volumes = "mkdir"
	I0913 19:23:37.900724   42355 command_runner.go:130] > # Temporary directory to use for storing big files
	I0913 19:23:37.900731   42355 command_runner.go:130] > # big_files_temporary_dir = ""
	I0913 19:23:37.900740   42355 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0913 19:23:37.900749   42355 command_runner.go:130] > # CNI plugins.
	I0913 19:23:37.900755   42355 command_runner.go:130] > [crio.network]
	I0913 19:23:37.900767   42355 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0913 19:23:37.900779   42355 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0913 19:23:37.900788   42355 command_runner.go:130] > # cni_default_network = ""
	I0913 19:23:37.900797   42355 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0913 19:23:37.900807   42355 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0913 19:23:37.900816   42355 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0913 19:23:37.900823   42355 command_runner.go:130] > # plugin_dirs = [
	I0913 19:23:37.900828   42355 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0913 19:23:37.900833   42355 command_runner.go:130] > # ]
	I0913 19:23:37.900839   42355 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0913 19:23:37.900846   42355 command_runner.go:130] > [crio.metrics]
	I0913 19:23:37.900851   42355 command_runner.go:130] > # Globally enable or disable metrics support.
	I0913 19:23:37.900857   42355 command_runner.go:130] > enable_metrics = true
	I0913 19:23:37.900862   42355 command_runner.go:130] > # Specify enabled metrics collectors.
	I0913 19:23:37.900869   42355 command_runner.go:130] > # Per default all metrics are enabled.
	I0913 19:23:37.900874   42355 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0913 19:23:37.900882   42355 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0913 19:23:37.900888   42355 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0913 19:23:37.900894   42355 command_runner.go:130] > # metrics_collectors = [
	I0913 19:23:37.900898   42355 command_runner.go:130] > # 	"operations",
	I0913 19:23:37.900902   42355 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0913 19:23:37.900907   42355 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0913 19:23:37.900912   42355 command_runner.go:130] > # 	"operations_errors",
	I0913 19:23:37.900916   42355 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0913 19:23:37.900923   42355 command_runner.go:130] > # 	"image_pulls_by_name",
	I0913 19:23:37.900927   42355 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0913 19:23:37.900934   42355 command_runner.go:130] > # 	"image_pulls_failures",
	I0913 19:23:37.900940   42355 command_runner.go:130] > # 	"image_pulls_successes",
	I0913 19:23:37.900945   42355 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0913 19:23:37.900951   42355 command_runner.go:130] > # 	"image_layer_reuse",
	I0913 19:23:37.900955   42355 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0913 19:23:37.900961   42355 command_runner.go:130] > # 	"containers_oom_total",
	I0913 19:23:37.900965   42355 command_runner.go:130] > # 	"containers_oom",
	I0913 19:23:37.900969   42355 command_runner.go:130] > # 	"processes_defunct",
	I0913 19:23:37.900973   42355 command_runner.go:130] > # 	"operations_total",
	I0913 19:23:37.900977   42355 command_runner.go:130] > # 	"operations_latency_seconds",
	I0913 19:23:37.900982   42355 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0913 19:23:37.900989   42355 command_runner.go:130] > # 	"operations_errors_total",
	I0913 19:23:37.900994   42355 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0913 19:23:37.901001   42355 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0913 19:23:37.901005   42355 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0913 19:23:37.901011   42355 command_runner.go:130] > # 	"image_pulls_success_total",
	I0913 19:23:37.901015   42355 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0913 19:23:37.901022   42355 command_runner.go:130] > # 	"containers_oom_count_total",
	I0913 19:23:37.901027   42355 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0913 19:23:37.901034   42355 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0913 19:23:37.901038   42355 command_runner.go:130] > # ]
	I0913 19:23:37.901045   42355 command_runner.go:130] > # The port on which the metrics server will listen.
	I0913 19:23:37.901048   42355 command_runner.go:130] > # metrics_port = 9090
	I0913 19:23:37.901054   42355 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0913 19:23:37.901060   42355 command_runner.go:130] > # metrics_socket = ""
	I0913 19:23:37.901065   42355 command_runner.go:130] > # The certificate for the secure metrics server.
	I0913 19:23:37.901071   42355 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0913 19:23:37.901089   42355 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0913 19:23:37.901094   42355 command_runner.go:130] > # certificate on any modification event.
	I0913 19:23:37.901100   42355 command_runner.go:130] > # metrics_cert = ""
	I0913 19:23:37.901105   42355 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0913 19:23:37.901112   42355 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0913 19:23:37.901117   42355 command_runner.go:130] > # metrics_key = ""
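	A sketch of serving metrics over TLS with a pre-provisioned key pair; the /etc/crio paths are assumptions, and if the files are missing CRI-O falls back to a self-signed certificate as noted above:

		[crio.metrics]
		enable_metrics = true
		metrics_port = 9090
		# Watched for modification events and reloaded on change.
		metrics_cert = "/etc/crio/metrics.crt"
		metrics_key = "/etc/crio/metrics.key"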
	I0913 19:23:37.901124   42355 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0913 19:23:37.901129   42355 command_runner.go:130] > [crio.tracing]
	I0913 19:23:37.901134   42355 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0913 19:23:37.901141   42355 command_runner.go:130] > # enable_tracing = false
	I0913 19:23:37.901146   42355 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0913 19:23:37.901151   42355 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0913 19:23:37.901160   42355 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0913 19:23:37.901165   42355 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0913 19:23:37.901171   42355 command_runner.go:130] > # CRI-O NRI configuration.
	I0913 19:23:37.901174   42355 command_runner.go:130] > [crio.nri]
	I0913 19:23:37.901181   42355 command_runner.go:130] > # Globally enable or disable NRI.
	I0913 19:23:37.901187   42355 command_runner.go:130] > # enable_nri = false
	I0913 19:23:37.901193   42355 command_runner.go:130] > # NRI socket to listen on.
	I0913 19:23:37.901199   42355 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0913 19:23:37.901204   42355 command_runner.go:130] > # NRI plugin directory to use.
	I0913 19:23:37.901209   42355 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0913 19:23:37.901216   42355 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0913 19:23:37.901222   42355 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0913 19:23:37.901230   42355 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0913 19:23:37.901234   42355 command_runner.go:130] > # nri_disable_connections = false
	I0913 19:23:37.901242   42355 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0913 19:23:37.901246   42355 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0913 19:23:37.901251   42355 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0913 19:23:37.901257   42355 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
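	A minimal sketch of enabling NRI using the commented defaults shown above (only enable_nri differs from those defaults):

		[crio.nri]
		enable_nri = true
		nri_listen = "/var/run/nri/nri.sock"
		nri_plugin_dir = "/opt/nri/plugins"
		nri_plugin_config_dir = "/etc/nri/conf.d"
		nri_plugin_registration_timeout = "5s"
		nri_plugin_request_timeout = "2s"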
	I0913 19:23:37.901262   42355 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0913 19:23:37.901268   42355 command_runner.go:130] > [crio.stats]
	I0913 19:23:37.901273   42355 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0913 19:23:37.901281   42355 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0913 19:23:37.901285   42355 command_runner.go:130] > # stats_collection_period = 0
	I0913 19:23:37.901358   42355 cni.go:84] Creating CNI manager for ""
	I0913 19:23:37.901368   42355 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:23:37.901376   42355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:23:37.901397   42355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-832180 NodeName:multinode-832180 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:23:37.901521   42355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-832180"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:23:37.901581   42355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:23:37.912321   42355 command_runner.go:130] > kubeadm
	I0913 19:23:37.912343   42355 command_runner.go:130] > kubectl
	I0913 19:23:37.912349   42355 command_runner.go:130] > kubelet
	I0913 19:23:37.912427   42355 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:23:37.912508   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:23:37.922714   42355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:23:37.941316   42355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:23:37.958864   42355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0913 19:23:37.976825   42355 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0913 19:23:37.980749   42355 command_runner.go:130] > 192.168.39.107	control-plane.minikube.internal
	I0913 19:23:37.980892   42355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:23:38.124686   42355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:23:38.143184   42355 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180 for IP: 192.168.39.107
	I0913 19:23:38.143209   42355 certs.go:194] generating shared ca certs ...
	I0913 19:23:38.143225   42355 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:23:38.143388   42355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:23:38.143436   42355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:23:38.143445   42355 certs.go:256] generating profile certs ...
	I0913 19:23:38.143526   42355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/client.key
	I0913 19:23:38.143590   42355 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key.496af81c
	I0913 19:23:38.143623   42355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key
	I0913 19:23:38.143635   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:23:38.143650   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:23:38.143662   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:23:38.143672   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:23:38.143684   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:23:38.143694   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:23:38.143706   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:23:38.143720   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:23:38.143777   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:23:38.143822   42355 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:23:38.143835   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:23:38.143869   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:23:38.143893   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:23:38.143915   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:23:38.143954   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:23:38.143995   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.144015   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.144027   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.144585   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:23:38.169918   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:23:38.199375   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:23:38.225013   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:23:38.249740   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:23:38.274811   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:23:38.298716   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:23:38.322393   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:23:38.346877   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:23:38.370793   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:23:38.419112   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:23:38.471595   42355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:23:38.506298   42355 ssh_runner.go:195] Run: openssl version
	I0913 19:23:38.515632   42355 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0913 19:23:38.515796   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:23:38.528837   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.534762   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.535087   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.535157   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.541094   42355 command_runner.go:130] > b5213941
	I0913 19:23:38.541440   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:23:38.551487   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:23:38.563944   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569400   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569649   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569709   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.575795   42355 command_runner.go:130] > 51391683
	I0913 19:23:38.575865   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:23:38.587393   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:23:38.600355   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605483   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605792   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605842   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.611613   42355 command_runner.go:130] > 3ec20f2e
	I0913 19:23:38.611736   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:23:38.621378   42355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:23:38.632025   42355 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:23:38.632053   42355 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0913 19:23:38.632059   42355 command_runner.go:130] > Device: 253,1	Inode: 5242920     Links: 1
	I0913 19:23:38.632068   42355 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0913 19:23:38.632083   42355 command_runner.go:130] > Access: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632092   42355 command_runner.go:130] > Modify: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632100   42355 command_runner.go:130] > Change: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632107   42355 command_runner.go:130] >  Birth: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632425   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:23:38.646666   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.646989   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:23:38.656551   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.656718   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:23:38.667349   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.667513   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:23:38.673929   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.674798   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:23:38.684060   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.684452   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:23:38.691312   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.691382   42355 kubeadm.go:392] StartCluster: {Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:23:38.691544   42355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:23:38.691603   42355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:23:38.746175   42355 command_runner.go:130] > f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f
	I0913 19:23:38.746206   42355 command_runner.go:130] > 19c09a93acc27cd0e802edd6cb335a581c1ffb7d3f0352d8f377993a5bb90522
	I0913 19:23:38.746215   42355 command_runner.go:130] > 3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d
	I0913 19:23:38.746227   42355 command_runner.go:130] > 804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903
	I0913 19:23:38.746236   42355 command_runner.go:130] > 96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121
	I0913 19:23:38.746244   42355 command_runner.go:130] > 76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086
	I0913 19:23:38.746252   42355 command_runner.go:130] > 66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6
	I0913 19:23:38.746262   42355 command_runner.go:130] > b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6
	I0913 19:23:38.746270   42355 command_runner.go:130] > 1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01
	I0913 19:23:38.746295   42355 cri.go:89] found id: "f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f"
	I0913 19:23:38.746307   42355 cri.go:89] found id: "19c09a93acc27cd0e802edd6cb335a581c1ffb7d3f0352d8f377993a5bb90522"
	I0913 19:23:38.746311   42355 cri.go:89] found id: "3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d"
	I0913 19:23:38.746315   42355 cri.go:89] found id: "804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903"
	I0913 19:23:38.746320   42355 cri.go:89] found id: "96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121"
	I0913 19:23:38.746324   42355 cri.go:89] found id: "76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086"
	I0913 19:23:38.746328   42355 cri.go:89] found id: "66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6"
	I0913 19:23:38.746332   42355 cri.go:89] found id: "b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6"
	I0913 19:23:38.746336   42355 cri.go:89] found id: "1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01"
	I0913 19:23:38.746344   42355 cri.go:89] found id: ""
	I0913 19:23:38.746394   42355 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.280003164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255529279977952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad3a920d-67d6-4701-8da9-1d795be77646 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.280574789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7549c6c-2ade-4cfe-b6ba-a0fee6d2a274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.280627204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7549c6c-2ade-4cfe-b6ba-a0fee6d2a274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.280949414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7549c6c-2ade-4cfe-b6ba-a0fee6d2a274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.322979828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=884c4ddb-7503-47ab-ab7d-5e911ed28e21 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.323055855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=884c4ddb-7503-47ab-ab7d-5e911ed28e21 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.324001849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe045bc0-5319-4bb7-a30e-5777c73c9028 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.324595006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255529324569834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe045bc0-5319-4bb7-a30e-5777c73c9028 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.325095169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04d6eaaf-ca94-4fc7-9f83-3427f068e6eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.325146105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04d6eaaf-ca94-4fc7-9f83-3427f068e6eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.325706557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04d6eaaf-ca94-4fc7-9f83-3427f068e6eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.372788793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a0e69ff-959b-4488-9af8-3d94d46786f6 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.372878649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a0e69ff-959b-4488-9af8-3d94d46786f6 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.374028956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be905d7c-8f72-4668-882e-d6513163b178 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.374541479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255529374515358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be905d7c-8f72-4668-882e-d6513163b178 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.375468876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f2480af-23f8-45e8-ac8e-146860c34824 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.375529051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f2480af-23f8-45e8-ac8e-146860c34824 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.376073817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f2480af-23f8-45e8-ac8e-146860c34824 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.419060031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3731bf4c-372e-4e4d-91fa-8101f8bde71d name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.419134457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3731bf4c-372e-4e4d-91fa-8101f8bde71d name=/runtime.v1.RuntimeService/Version
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.420085900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ce578fb-d640-4569-a21c-dbd50cf012c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.420563789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255529420541371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ce578fb-d640-4569-a21c-dbd50cf012c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.421129240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c389c925-00e8-457b-ba7a-9c9701a6f8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.421192387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c389c925-00e8-457b-ba7a-9c9701a6f8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:25:29 multinode-832180 crio[2751]: time="2024-09-13 19:25:29.421565482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c389c925-00e8-457b-ba7a-9c9701a6f8dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c02390146341d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   305b044bacb17       busybox-7dff88458-mjlx4
	a29ceeace1750       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   81bf57f457cf7       kindnet-prk4k
	daed530ab23b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   82adf420ce8e6       storage-provisioner
	9edd612ed0492       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   36637a2505a8d       kube-proxy-sntdv
	e5f29f7f76bdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   36f0301cecbec       coredns-7c65d6cfc9-w8ktp
	2696e7a77e5d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   8826fdf3e66c2       etcd-multinode-832180
	fc030c85d46d7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   df8ca9089a3d1       kube-controller-manager-multinode-832180
	99b4d9896fa0b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   07eca42c19474       kube-scheduler-multinode-832180
	02b3f56cb8d96       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   246e4d12a9a5e       kube-apiserver-multinode-832180
	f7319753489e7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Exited              coredns                   1                   36f0301cecbec       coredns-7c65d6cfc9-w8ktp
	c080eec6d9e5b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   cc39c6400994b       busybox-7dff88458-mjlx4
	3153bfb4d1050       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   01a1ad4d7857c       storage-provisioner
	804b00dc869d9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   ecd348aa55f04       kindnet-prk4k
	96891beb662f6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   dac5dc01376ab       kube-proxy-sntdv
	76ff5353d55e9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   6276abda062c5       kube-controller-manager-multinode-832180
	66fe7d1de1c37       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   84898a16bf03a       etcd-multinode-832180
	b426c3236a868       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   267a8b99ca5db       kube-scheduler-multinode-832180
	1cfc48ae630fd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   2bf7064075a7b       kube-apiserver-multinode-832180
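
The container status table above is the all-containers listing that crictl produces against the CRI-O socket advertised in the node annotations (unix:///var/run/crio/crio.sock). A minimal Go sketch for capturing the same listing from the host, assuming the multinode-832180 profile is still running and crictl is available on the node (reached here through minikube ssh); illustrative only, not part of the test harness:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Illustrative only: run crictl on the multinode-832180 node over
    // "minikube ssh" and print the same all-containers listing shown in
    // the container status table above.
    func main() {
        out, err := exec.Command("minikube", "-p", "multinode-832180", "ssh", "--",
            "sudo", "crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock",
            "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("crictl via minikube ssh failed:", err)
        }
        fmt.Print(string(out))
    }
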
	
	
	==> coredns [e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44791 - 9377 "HINFO IN 4364243046564626994.2591110203882479955. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015007036s
	
	
	==> coredns [f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:42000 - 9876 "HINFO IN 310529751341749349.2565869892791839521. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015265524s
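
The "connection refused" errors from the exited coredns container (f7319753489e7) above all target 10.96.0.1:443, the in-cluster kubernetes Service VIP, and span the window in which the restarted kube-apiserver was not yet serving. A throwaway probe run from inside the cluster network shows the same condition; a sketch, for illustration only:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Illustrative probe: dial the kubernetes Service VIP that coredns was
    // failing to reach in the log above. A dial error here corresponds to the
    // window before the restarted kube-apiserver came back up.
    func main() {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
        if err != nil {
            fmt.Println("kubernetes Service VIP unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("kubernetes Service VIP reachable")
    }
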
	
	
	==> describe nodes <==
	Name:               multinode-832180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-832180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=multinode-832180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_17_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-832180
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:25:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-832180
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04c9021f9b344e88906d4281c3d54114
	  System UUID:                04c9021f-9b34-4e88-906d-4281c3d54114
	  Boot ID:                    c72d22e3-5904-415c-909f-d71bc2e65107
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mjlx4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 coredns-7c65d6cfc9-w8ktp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-832180                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m29s
	  kube-system                 kindnet-prk4k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-832180             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-832180    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-sntdv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-832180             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m23s                  kube-proxy       
	  Normal  Starting                 96s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x7 over 8m35s)  kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s                  kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m25s                  node-controller  Node multinode-832180 event: Registered Node multinode-832180 in Controller
	  Normal  NodeReady                8m12s                  kubelet          Node multinode-832180 status is now: NodeReady
	  Normal  Starting                 99s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)      kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)      kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)      kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                    node-controller  Node multinode-832180 event: Registered Node multinode-832180 in Controller
	
	
	Name:               multinode-832180-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-832180-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=multinode-832180
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T19_24_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:24:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-832180-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:25:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:24:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:24:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:24:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:24:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    multinode-832180-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 41e9fdb14f5e4f07b2124dd3b9aa13fb
	  System UUID:                41e9fdb1-4f5e-4f07-b212-4dd3b9aa13fb
	  Boot ID:                    b19e2870-4145-4af6-878d-8985ceb03442
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99fvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-sdfsx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m40s
	  kube-system                 kube-proxy-sgggj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m34s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m40s (x2 over 7m40s)  kubelet     Node multinode-832180-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x2 over 7m40s)  kubelet     Node multinode-832180-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x2 over 7m40s)  kubelet     Node multinode-832180-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m20s                  kubelet     Node multinode-832180-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-832180-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-832180-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-832180-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-832180-m02 status is now: NodeReady
	
	
	Name:               multinode-832180-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-832180-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=multinode-832180
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T19_25_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:25:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-832180-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:25:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:25:26 +0000   Fri, 13 Sep 2024 19:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:25:26 +0000   Fri, 13 Sep 2024 19:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:25:26 +0000   Fri, 13 Sep 2024 19:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:25:26 +0000   Fri, 13 Sep 2024 19:25:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    multinode-832180-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6f9667a4f1a4a03be6717eff330065a
	  System UUID:                c6f9667a-4f1a-4a03-be67-17eff330065a
	  Boot ID:                    caabe41a-7d8f-4d40-8ee2-2161b0e9e52c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lg94d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-7zhjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m46s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet     Node multinode-832180-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-832180-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-832180-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m31s                  kubelet     Node multinode-832180-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-832180-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-832180-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-832180-m03 status is now: NodeReady
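
Each of the three node descriptions above records more than one round of NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeReady events, the later ones coming from the kubelet that rejoined after the restart, and all three nodes report a Ready condition of True at the time the log was captured. A minimal client-go sketch for pulling the same conditions programmatically, assuming the default kubeconfig (~/.kube/config) currently points at the multinode-832180 cluster:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Minimal sketch, assuming ~/.kube/config points at the cluster above:
    // print each node's conditions, the same data summarized in the
    // "describe nodes" sections.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }
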
	
	
	==> dmesg <==
	[  +0.061831] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.195951] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.134058] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.271704] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.926806] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.177423] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056870] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990166] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.071297] kauditd_printk_skb: 69 callbacks suppressed
	[Sep13 19:17] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.111131] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.637323] kauditd_printk_skb: 69 callbacks suppressed
	[Sep13 19:18] kauditd_printk_skb: 14 callbacks suppressed
	[Sep13 19:23] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.144307] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.172772] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +0.150470] systemd-fstab-generator[2714]: Ignoring "noauto" option for root device
	[  +0.299619] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +1.654897] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +5.324522] kauditd_printk_skb: 147 callbacks suppressed
	[  +6.649948] systemd-fstab-generator[3374]: Ignoring "noauto" option for root device
	[  +0.098308] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.095828] kauditd_printk_skb: 52 callbacks suppressed
	[Sep13 19:24] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[ +24.280957] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854] <==
	{"level":"info","ts":"2024-09-13T19:23:46.223126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2024-09-13T19:23:46.223609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:46.224781Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:23:46.224837Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:23:46.229230Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T19:23:46.229580Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:23:46.229620Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:23:46.229772Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:23:46.229797Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:23:47.501680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.508441Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:multinode-832180 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:23:47.508458Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:23:47.508782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:23:47.508861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:23:47.508499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:23:47.509753Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:47.509786Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:47.510674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:23:47.510814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	
	
	==> etcd [66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6] <==
	{"level":"info","ts":"2024-09-13T19:17:54.885468Z","caller":"traceutil/trace.go:171","msg":"trace[547892435] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"591.620953ms","start":"2024-09-13T19:17:54.293837Z","end":"2024-09-13T19:17:54.885458Z","steps":["trace[547892435] 'process raft request'  (duration: 554.029208ms)","trace[547892435] 'compare'  (duration: 37.11374ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:17:54.886566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.293820Z","time spent":"592.682653ms","remote":"127.0.0.1:44170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-832180-m02.17f4e3d80edd35ed\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-832180-m02.17f4e3d80edd35ed\" value_size:629 lease:3701556390052319695 >> failure:<>"}
	{"level":"info","ts":"2024-09-13T19:17:54.886090Z","caller":"traceutil/trace.go:171","msg":"trace[1035367749] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"441.554463ms","start":"2024-09-13T19:17:54.444477Z","end":"2024-09-13T19:17:54.886031Z","steps":["trace[1035367749] 'process raft request'  (duration: 440.655081ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.886813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.444455Z","time spent":"442.329686ms","remote":"127.0.0.1:44170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":728,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-sgggj.17f4e3d81733dc5c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-sgggj.17f4e3d81733dc5c\" value_size:648 lease:3701556390052319695 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:17:54.887114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.775271ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:17:54.887171Z","caller":"traceutil/trace.go:171","msg":"trace[1961735353] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:506; }","duration":"267.834397ms","start":"2024-09-13T19:17:54.619326Z","end":"2024-09-13T19:17:54.887160Z","steps":["trace[1961735353] 'agreement among raft nodes before linearized reading'  (duration: 267.761166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.887942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.61981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-832180-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-13T19:17:54.888886Z","caller":"traceutil/trace.go:171","msg":"trace[1399790241] range","detail":"{range_begin:/registry/minions/multinode-832180-m02; range_end:; response_count:1; response_revision:506; }","duration":"338.565024ms","start":"2024-09-13T19:17:54.550306Z","end":"2024-09-13T19:17:54.888871Z","steps":["trace[1399790241] 'agreement among raft nodes before linearized reading'  (duration: 336.059212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.888997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.550234Z","time spent":"338.747284ms","remote":"127.0.0.1:44304","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2917,"request content":"key:\"/registry/minions/multinode-832180-m02\" "}
	{"level":"warn","ts":"2024-09-13T19:17:54.889168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.083129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-832180-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-13T19:17:54.889242Z","caller":"traceutil/trace.go:171","msg":"trace[2060334768] range","detail":"{range_begin:/registry/minions/multinode-832180-m02; range_end:; response_count:1; response_revision:506; }","duration":"288.158853ms","start":"2024-09-13T19:17:54.601074Z","end":"2024-09-13T19:17:54.889233Z","steps":["trace[2060334768] 'agreement among raft nodes before linearized reading'  (duration: 288.056896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:18:47.486692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.236597ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:18:47.487100Z","caller":"traceutil/trace.go:171","msg":"trace[88599264] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:606; }","duration":"122.728743ms","start":"2024-09-13T19:18:47.364349Z","end":"2024-09-13T19:18:47.487078Z","steps":["trace[88599264] 'range keys from in-memory index tree'  (duration: 122.213458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:18:47.486948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.652376ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3701556390052321208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-832180-m03.17f4e3e462bb82c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-832180-m03.17f4e3e462bb82c4\" value_size:646 lease:3701556390052320798 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:18:47.487329Z","caller":"traceutil/trace.go:171","msg":"trace[994767466] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"238.678573ms","start":"2024-09-13T19:18:47.248635Z","end":"2024-09-13T19:18:47.487313Z","steps":["trace[994767466] 'process raft request'  (duration: 88.517151ms)","trace[994767466] 'compare'  (duration: 149.420411ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:22:04.271412Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T19:22:04.271541Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-832180","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	{"level":"warn","ts":"2024-09-13T19:22:04.271684Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.271772Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.327251Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.327385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T19:22:04.327509Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec1614c5c0f7335e","current-leader-member-id":"ec1614c5c0f7335e"}
	{"level":"info","ts":"2024-09-13T19:22:04.334645Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:22:04.334838Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:22:04.334874Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-832180","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	
	
	==> kernel <==
	 19:25:29 up 9 min,  0 users,  load average: 0.19, 0.21, 0.12
	Linux multinode-832180 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903] <==
	I0913 19:21:17.224740       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:27.228585       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:27.228702       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:27.228871       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:27.228901       1 main.go:299] handling current node
	I0913 19:21:27.228928       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:27.228945       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:37.232564       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:37.232627       1 main.go:299] handling current node
	I0913 19:21:37.232661       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:37.232667       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:37.232812       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:37.232835       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:47.224916       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:47.225046       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:47.225245       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:47.225326       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:47.225450       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:47.225477       1 main.go:299] handling current node
	I0913 19:21:57.226360       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:57.226413       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:57.226546       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:57.226570       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:57.226640       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:57.226663       1 main.go:299] handling current node
	
	
	==> kindnet [a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7] <==
	I0913 19:24:44.221590       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:24:54.215040       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:24:54.215073       1 main.go:299] handling current node
	I0913 19:24:54.215088       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:24:54.215092       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:24:54.215389       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:24:54.215464       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:25:04.218777       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:25:04.218845       1 main.go:299] handling current node
	I0913 19:25:04.218870       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:25:04.218878       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:25:04.219017       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:25:04.219053       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:25:14.218795       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:25:14.218891       1 main.go:299] handling current node
	I0913 19:25:14.218923       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:25:14.218941       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:25:14.219059       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:25:14.219088       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.2.0/24] 
	I0913 19:25:24.214958       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:25:24.215069       1 main.go:299] handling current node
	I0913 19:25:24.215100       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:25:24.215118       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:25:24.215255       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:25:24.215379       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846] <==
	I0913 19:23:51.521570       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:23:51.524955       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:23:51.525185       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:23:51.525234       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:23:51.525343       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:23:51.525369       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:23:51.525392       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:23:51.525413       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:23:51.526250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:23:51.526312       1 policy_source.go:224] refreshing policies
	I0913 19:23:51.530374       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:23:51.530437       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:23:51.530448       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:23:51.531089       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:23:51.530456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:23:51.537571       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:23:51.538047       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:23:52.334444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:23:53.732493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 19:23:53.902877       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 19:23:53.918220       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 19:23:54.003589       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:23:54.019691       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:23:55.016036       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:23:55.164791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01] <==
	E0913 19:18:16.856527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43310: use of closed network connection
	E0913 19:18:17.037526       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43336: use of closed network connection
	E0913 19:18:17.210463       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43346: use of closed network connection
	E0913 19:18:17.387995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43368: use of closed network connection
	E0913 19:18:17.567495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43398: use of closed network connection
	E0913 19:18:17.741129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43424: use of closed network connection
	E0913 19:18:18.016837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43448: use of closed network connection
	E0913 19:18:18.184869       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43466: use of closed network connection
	E0913 19:18:18.353883       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43490: use of closed network connection
	E0913 19:18:18.521138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43510: use of closed network connection
	I0913 19:22:04.264674       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0913 19:22:04.291627       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0913 19:22:04.294749       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:22:04.295612       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:22:04.295806       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0913 19:22:04.296472       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:22:04.297871       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0913 19:22:04.298492       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0913 19:22:04.298788       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0913 19:22:04.298866       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0913 19:22:04.298982       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	W0913 19:22:04.299023       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0913 19:22:04.299098       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0913 19:22:04.299179       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0913 19:22:04.299208       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	
	
	==> kube-controller-manager [76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086] <==
	I0913 19:19:38.196468       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:38.196756       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:39.421905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:39.423660       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-832180-m03\" does not exist"
	I0913 19:19:39.432887       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-832180-m03" podCIDRs=["10.244.3.0/24"]
	I0913 19:19:39.432945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.433245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.445644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.750568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:40.155878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:44.411520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:49.758822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:58.548929       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:58.550092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:58.562002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:59.350089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.371810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:20:44.372497       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.375619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:20:44.402647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.405943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:20:44.440117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.942689ms"
	I0913 19:20:44.440334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.082µs"
	I0913 19:20:49.519663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:59.598372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	
	
	==> kube-controller-manager [fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe] <==
	I0913 19:24:47.554934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:24:47.563709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.494µs"
	I0913 19:24:47.580071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.182µs"
	I0913 19:24:50.000352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:24:51.258907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.484006ms"
	I0913 19:24:51.259579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="146.504µs"
	I0913 19:24:58.549651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:25:05.493720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:05.510675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:05.742107       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:25:05.742559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:06.768944       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-832180-m03\" does not exist"
	I0913 19:25:06.769501       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:25:06.777427       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-832180-m03" podCIDRs=["10.244.2.0/24"]
	I0913 19:25:06.777509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:06.788631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:06.807253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:07.261855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:07.608594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:10.062840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:16.838380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:26.591109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:26.591342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:25:26.614908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:30.026840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	
	
	==> kube-proxy [96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:17:06.330737       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:17:06.360615       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0913 19:17:06.360738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:17:06.397565       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:17:06.397615       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:17:06.397638       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:17:06.400208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:17:06.400637       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:17:06.400663       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:17:06.402771       1 config.go:199] "Starting service config controller"
	I0913 19:17:06.402810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:17:06.402841       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:17:06.402847       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:17:06.403473       1 config.go:328] "Starting node config controller"
	I0913 19:17:06.403495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:17:06.503787       1 shared_informer.go:320] Caches are synced for node config
	I0913 19:17:06.503878       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:17:06.503920       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:23:53.381234       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:23:53.395063       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0913 19:23:53.395185       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:23:53.535425       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:23:53.535471       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:23:53.535498       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:23:53.542956       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:23:53.543314       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:23:53.543342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:23:53.548980       1 config.go:199] "Starting service config controller"
	I0913 19:23:53.549041       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:23:53.549095       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:23:53.549117       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:23:53.549687       1 config.go:328] "Starting node config controller"
	I0913 19:23:53.549719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:23:53.649315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:23:53.649424       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:23:53.651394       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436] <==
	I0913 19:23:43.882424       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:23:51.378898       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:23:51.378990       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:23:51.379018       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:23:51.379048       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:23:51.431447       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:23:51.431676       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:23:51.438351       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:23:51.440132       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:23:51.442503       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:23:51.442671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:23:51.541028       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6] <==
	E0913 19:16:57.663922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:57.664025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:57.664056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:57.663962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 19:16:57.664184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.536147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:58.536319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.584757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 19:16:58.584806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.610672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 19:16:58.610707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.642453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:58.642606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.694346       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 19:16:58.694395       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 19:16:58.698057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 19:16:58.698107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.806034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 19:16:58.806193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.813414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 19:16:58.813467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.875372       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 19:16:58.875467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0913 19:17:01.151847       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 19:22:04.264978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:23:52 multinode-832180 kubelet[3381]: I0913 19:23:52.648480    3381 scope.go:117] "RemoveContainer" containerID="f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f"
	Sep 13 19:24:00 multinode-832180 kubelet[3381]: E0913 19:24:00.510928    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255440510708749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:00 multinode-832180 kubelet[3381]: E0913 19:24:00.510952    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255440510708749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:00 multinode-832180 kubelet[3381]: I0913 19:24:00.591644    3381 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 13 19:24:10 multinode-832180 kubelet[3381]: E0913 19:24:10.513637    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255450512630186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:10 multinode-832180 kubelet[3381]: E0913 19:24:10.513678    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255450512630186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:20 multinode-832180 kubelet[3381]: E0913 19:24:20.518033    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255460515256128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:20 multinode-832180 kubelet[3381]: E0913 19:24:20.526368    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255460515256128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:30 multinode-832180 kubelet[3381]: E0913 19:24:30.527624    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255470527352524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:30 multinode-832180 kubelet[3381]: E0913 19:24:30.527685    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255470527352524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:40 multinode-832180 kubelet[3381]: E0913 19:24:40.529940    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255480529457984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:40 multinode-832180 kubelet[3381]: E0913 19:24:40.530396    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255480529457984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:50 multinode-832180 kubelet[3381]: E0913 19:24:50.486763    3381 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:24:50 multinode-832180 kubelet[3381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:24:50 multinode-832180 kubelet[3381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:24:50 multinode-832180 kubelet[3381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:24:50 multinode-832180 kubelet[3381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:24:50 multinode-832180 kubelet[3381]: E0913 19:24:50.533879    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255490533614826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:24:50 multinode-832180 kubelet[3381]: E0913 19:24:50.533900    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255490533614826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:00 multinode-832180 kubelet[3381]: E0913 19:25:00.536504    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255500535995879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:00 multinode-832180 kubelet[3381]: E0913 19:25:00.536568    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255500535995879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:10 multinode-832180 kubelet[3381]: E0913 19:25:10.539163    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255510538670631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:10 multinode-832180 kubelet[3381]: E0913 19:25:10.539211    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255510538670631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:20 multinode-832180 kubelet[3381]: E0913 19:25:20.541221    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255520540778360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:25:20 multinode-832180 kubelet[3381]: E0913 19:25:20.541306    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255520540778360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0913 19:25:29.004190   43478 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-832180 -n multinode-832180
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-832180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (329.50s)

TestMultiNode/serial/StopMultiNode (141.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 stop
E0913 19:25:57.576393   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-832180 stop: exit status 82 (2m0.458383643s)

-- stdout --
	* Stopping node "multinode-832180-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-832180 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-832180 status: exit status 3 (18.642331144s)

-- stdout --
	multinode-832180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-832180-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0913 19:27:52.082451   44117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0913 19:27:52.082486   44117 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-832180 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-832180 -n multinode-832180
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-832180 logs -n 25: (1.527003346s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180:/home/docker/cp-test_multinode-832180-m02_multinode-832180.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180 sudo cat                                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m02_multinode-832180.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03:/home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180-m03 sudo cat                                   | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp testdata/cp-test.txt                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180:/home/docker/cp-test_multinode-832180-m03_multinode-832180.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180 sudo cat                                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02:/home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180-m02 sudo cat                                   | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-832180 node stop m03                                                          | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	| node    | multinode-832180 node start                                                             | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| stop    | -p multinode-832180                                                                     | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| start   | -p multinode-832180                                                                     | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:22 UTC | 13 Sep 24 19:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC |                     |
	| node    | multinode-832180 node delete                                                            | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC | 13 Sep 24 19:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-832180 stop                                                                   | multinode-832180 | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:22:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:22:03.249260   42355 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:22:03.249376   42355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:22:03.249384   42355 out.go:358] Setting ErrFile to fd 2...
	I0913 19:22:03.249389   42355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:22:03.249580   42355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:22:03.250213   42355 out.go:352] Setting JSON to false
	I0913 19:22:03.251313   42355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3866,"bootTime":1726251457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:22:03.251402   42355 start.go:139] virtualization: kvm guest
	I0913 19:22:03.253736   42355 out.go:177] * [multinode-832180] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:22:03.255048   42355 notify.go:220] Checking for updates...
	I0913 19:22:03.255057   42355 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:22:03.256373   42355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:22:03.257713   42355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:22:03.258927   42355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:22:03.260199   42355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:22:03.261584   42355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:22:03.263594   42355 config.go:182] Loaded profile config "multinode-832180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:22:03.263708   42355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:22:03.264138   42355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:22:03.264179   42355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:22:03.279526   42355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0913 19:22:03.279961   42355 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:22:03.280484   42355 main.go:141] libmachine: Using API Version  1
	I0913 19:22:03.280504   42355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:22:03.280792   42355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:22:03.280976   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.317196   42355 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:22:03.318783   42355 start.go:297] selected driver: kvm2
	I0913 19:22:03.318804   42355 start.go:901] validating driver "kvm2" against &{Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:22:03.318960   42355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:22:03.319320   42355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:22:03.319420   42355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:22:03.334864   42355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:22:03.335552   42355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:22:03.335595   42355 cni.go:84] Creating CNI manager for ""
	I0913 19:22:03.335658   42355 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:22:03.335728   42355 start.go:340] cluster config:
	{Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:22:03.335857   42355 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:22:03.337849   42355 out.go:177] * Starting "multinode-832180" primary control-plane node in "multinode-832180" cluster
	I0913 19:22:03.339093   42355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:22:03.339146   42355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 19:22:03.339157   42355 cache.go:56] Caching tarball of preloaded images
	I0913 19:22:03.339245   42355 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:22:03.339258   42355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 19:22:03.339408   42355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/config.json ...
	I0913 19:22:03.339613   42355 start.go:360] acquireMachinesLock for multinode-832180: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:22:03.339671   42355 start.go:364] duration metric: took 37.899µs to acquireMachinesLock for "multinode-832180"
	I0913 19:22:03.339690   42355 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:22:03.339700   42355 fix.go:54] fixHost starting: 
	I0913 19:22:03.339975   42355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:22:03.340012   42355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:22:03.354368   42355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0913 19:22:03.354900   42355 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:22:03.355484   42355 main.go:141] libmachine: Using API Version  1
	I0913 19:22:03.355506   42355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:22:03.355807   42355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:22:03.355983   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.356157   42355 main.go:141] libmachine: (multinode-832180) Calling .GetState
	I0913 19:22:03.357902   42355 fix.go:112] recreateIfNeeded on multinode-832180: state=Running err=<nil>
	W0913 19:22:03.357923   42355 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:22:03.360152   42355 out.go:177] * Updating the running kvm2 "multinode-832180" VM ...
	I0913 19:22:03.361571   42355 machine.go:93] provisionDockerMachine start ...
	I0913 19:22:03.361601   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:22:03.361832   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.364434   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.364877   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.364901   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.365054   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.365213   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.365345   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.365456   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.365624   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.365826   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.365838   42355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:22:03.475028   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-832180
	
	I0913 19:22:03.475051   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.475280   42355 buildroot.go:166] provisioning hostname "multinode-832180"
	I0913 19:22:03.475307   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.475482   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.478454   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.478990   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.479011   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.479221   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.479384   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.479518   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.479658   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.479831   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.479993   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.480006   42355 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-832180 && echo "multinode-832180" | sudo tee /etc/hostname
	I0913 19:22:03.602971   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-832180
	
	I0913 19:22:03.602998   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.605756   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.606184   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.606211   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.606392   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.606574   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.606732   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.606835   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.606955   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:03.607131   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:03.607146   42355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-832180' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-832180/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-832180' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:22:03.719036   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:22:03.719063   42355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:22:03.719099   42355 buildroot.go:174] setting up certificates
	I0913 19:22:03.719112   42355 provision.go:84] configureAuth start
	I0913 19:22:03.719122   42355 main.go:141] libmachine: (multinode-832180) Calling .GetMachineName
	I0913 19:22:03.719457   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:22:03.722043   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.722403   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.722434   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.722586   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.724810   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.725206   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.725237   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.725344   42355 provision.go:143] copyHostCerts
	I0913 19:22:03.725384   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:22:03.725414   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:22:03.725423   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:22:03.725490   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:22:03.725573   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:22:03.725596   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:22:03.725602   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:22:03.725626   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:22:03.725680   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:22:03.725696   42355 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:22:03.725701   42355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:22:03.725721   42355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:22:03.725807   42355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.multinode-832180 san=[127.0.0.1 192.168.39.107 localhost minikube multinode-832180]
	I0913 19:22:03.971079   42355 provision.go:177] copyRemoteCerts
	I0913 19:22:03.971140   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:22:03.971165   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:03.973539   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.973883   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:03.973912   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:03.974145   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:03.974336   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:03.974491   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:03.974607   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:22:04.057235   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 19:22:04.057315   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:22:04.083020   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 19:22:04.083093   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0913 19:22:04.107189   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 19:22:04.107267   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:22:04.137338   42355 provision.go:87] duration metric: took 418.215423ms to configureAuth
	I0913 19:22:04.137361   42355 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:22:04.137587   42355 config.go:182] Loaded profile config "multinode-832180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:22:04.137666   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:22:04.141005   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:04.141415   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:22:04.141461   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:22:04.141621   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:22:04.141809   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:04.141979   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:22:04.142117   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:22:04.142276   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:22:04.142444   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:22:04.142459   42355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:23:34.902586   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:23:34.902620   42355 machine.go:96] duration metric: took 1m31.541029453s to provisionDockerMachine
	I0913 19:23:34.902634   42355 start.go:293] postStartSetup for "multinode-832180" (driver="kvm2")
	I0913 19:23:34.902648   42355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:23:34.902674   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:34.903008   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:23:34.903039   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:34.906264   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:34.906739   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:34.906764   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:34.906973   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:34.907161   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:34.907290   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:34.907393   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:34.995617   42355 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:23:35.000099   42355 command_runner.go:130] > NAME=Buildroot
	I0913 19:23:35.000122   42355 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0913 19:23:35.000126   42355 command_runner.go:130] > ID=buildroot
	I0913 19:23:35.000137   42355 command_runner.go:130] > VERSION_ID=2023.02.9
	I0913 19:23:35.000154   42355 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0913 19:23:35.000261   42355 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:23:35.000285   42355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:23:35.000351   42355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:23:35.000445   42355 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:23:35.000456   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /etc/ssl/certs/110792.pem
	I0913 19:23:35.000595   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:23:35.011312   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:23:35.035931   42355 start.go:296] duration metric: took 133.28504ms for postStartSetup
	I0913 19:23:35.035969   42355 fix.go:56] duration metric: took 1m31.696271499s for fixHost
	I0913 19:23:35.035988   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.038594   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.039022   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.039047   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.039202   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.039384   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.039548   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.039663   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.039794   42355 main.go:141] libmachine: Using SSH client type: native
	I0913 19:23:35.039970   42355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0913 19:23:35.039983   42355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:23:35.147041   42355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726255415.124046029
	
	I0913 19:23:35.147065   42355 fix.go:216] guest clock: 1726255415.124046029
	I0913 19:23:35.147072   42355 fix.go:229] Guest: 2024-09-13 19:23:35.124046029 +0000 UTC Remote: 2024-09-13 19:23:35.035973119 +0000 UTC m=+91.823272639 (delta=88.07291ms)
	I0913 19:23:35.147112   42355 fix.go:200] guest clock delta is within tolerance: 88.07291ms
	I0913 19:23:35.147116   42355 start.go:83] releasing machines lock for "multinode-832180", held for 1m31.807435737s
	I0913 19:23:35.147137   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.147366   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:23:35.150000   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.150334   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.150364   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.150460   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151116   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151288   42355 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:23:35.151359   42355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:23:35.151398   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.151529   42355 ssh_runner.go:195] Run: cat /version.json
	I0913 19:23:35.151552   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:23:35.153925   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154308   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154342   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.154369   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154486   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.154631   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.154717   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:35.154741   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:35.154780   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.154895   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:23:35.154954   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:35.155038   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:23:35.155163   42355 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:23:35.155296   42355 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:23:35.231487   42355 command_runner.go:130] > {"iso_version": "v1.34.0-1726156389-19616", "kicbase_version": "v0.0.45-1725963390-19606", "minikube_version": "v1.34.0", "commit": "5022c44a3509464df545efc115fbb6c3f1b5e972"}
	I0913 19:23:35.231745   42355 ssh_runner.go:195] Run: systemctl --version
	I0913 19:23:35.258963   42355 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0913 19:23:35.259010   42355 command_runner.go:130] > systemd 252 (252)
	I0913 19:23:35.259035   42355 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0913 19:23:35.259103   42355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:23:35.429729   42355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 19:23:35.449928   42355 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0913 19:23:35.449996   42355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:23:35.450043   42355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:23:35.467409   42355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:23:35.467439   42355 start.go:495] detecting cgroup driver to use...
	I0913 19:23:35.467515   42355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:23:35.491182   42355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:23:35.514420   42355 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:23:35.514496   42355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:23:35.528909   42355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:23:35.547571   42355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:23:35.697656   42355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:23:35.840192   42355 docker.go:233] disabling docker service ...
	I0913 19:23:35.840273   42355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:23:35.858823   42355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:23:35.874448   42355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:23:36.020272   42355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:23:36.168786   42355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:23:36.185474   42355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:23:36.206574   42355 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0913 19:23:36.206616   42355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:23:36.206670   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.219258   42355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:23:36.219336   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.231233   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.242940   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.254701   42355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:23:36.266817   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.278064   42355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.289357   42355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:23:36.300370   42355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:23:36.310295   42355 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0913 19:23:36.310365   42355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:23:36.321313   42355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:23:36.476708   42355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:23:37.656296   42355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.179547879s)
	I0913 19:23:37.656324   42355 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:23:37.656383   42355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:23:37.662472   42355 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0913 19:23:37.662500   42355 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0913 19:23:37.662509   42355 command_runner.go:130] > Device: 0,22	Inode: 1376        Links: 1
	I0913 19:23:37.662516   42355 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0913 19:23:37.662523   42355 command_runner.go:130] > Access: 2024-09-13 19:23:37.610496236 +0000
	I0913 19:23:37.662529   42355 command_runner.go:130] > Modify: 2024-09-13 19:23:37.492488935 +0000
	I0913 19:23:37.662534   42355 command_runner.go:130] > Change: 2024-09-13 19:23:37.492488935 +0000
	I0913 19:23:37.662539   42355 command_runner.go:130] >  Birth: -
	I0913 19:23:37.662572   42355 start.go:563] Will wait 60s for crictl version
	I0913 19:23:37.662624   42355 ssh_runner.go:195] Run: which crictl
	I0913 19:23:37.666993   42355 command_runner.go:130] > /usr/bin/crictl
	I0913 19:23:37.667178   42355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:23:37.706486   42355 command_runner.go:130] > Version:  0.1.0
	I0913 19:23:37.706515   42355 command_runner.go:130] > RuntimeName:  cri-o
	I0913 19:23:37.706520   42355 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0913 19:23:37.706526   42355 command_runner.go:130] > RuntimeApiVersion:  v1
	I0913 19:23:37.706545   42355 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:23:37.706611   42355 ssh_runner.go:195] Run: crio --version
	I0913 19:23:37.735900   42355 command_runner.go:130] > crio version 1.29.1
	I0913 19:23:37.735929   42355 command_runner.go:130] > Version:        1.29.1
	I0913 19:23:37.735937   42355 command_runner.go:130] > GitCommit:      unknown
	I0913 19:23:37.735942   42355 command_runner.go:130] > GitCommitDate:  unknown
	I0913 19:23:37.735946   42355 command_runner.go:130] > GitTreeState:   clean
	I0913 19:23:37.735951   42355 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0913 19:23:37.735955   42355 command_runner.go:130] > GoVersion:      go1.21.6
	I0913 19:23:37.735959   42355 command_runner.go:130] > Compiler:       gc
	I0913 19:23:37.735963   42355 command_runner.go:130] > Platform:       linux/amd64
	I0913 19:23:37.735967   42355 command_runner.go:130] > Linkmode:       dynamic
	I0913 19:23:37.735972   42355 command_runner.go:130] > BuildTags:      
	I0913 19:23:37.735976   42355 command_runner.go:130] >   containers_image_ostree_stub
	I0913 19:23:37.735980   42355 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0913 19:23:37.735984   42355 command_runner.go:130] >   btrfs_noversion
	I0913 19:23:37.735988   42355 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0913 19:23:37.735992   42355 command_runner.go:130] >   libdm_no_deferred_remove
	I0913 19:23:37.735998   42355 command_runner.go:130] >   seccomp
	I0913 19:23:37.736003   42355 command_runner.go:130] > LDFlags:          unknown
	I0913 19:23:37.736007   42355 command_runner.go:130] > SeccompEnabled:   true
	I0913 19:23:37.736013   42355 command_runner.go:130] > AppArmorEnabled:  false
	I0913 19:23:37.736112   42355 ssh_runner.go:195] Run: crio --version
	I0913 19:23:37.763708   42355 command_runner.go:130] > crio version 1.29.1
	I0913 19:23:37.763730   42355 command_runner.go:130] > Version:        1.29.1
	I0913 19:23:37.763736   42355 command_runner.go:130] > GitCommit:      unknown
	I0913 19:23:37.763741   42355 command_runner.go:130] > GitCommitDate:  unknown
	I0913 19:23:37.763745   42355 command_runner.go:130] > GitTreeState:   clean
	I0913 19:23:37.763750   42355 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0913 19:23:37.763754   42355 command_runner.go:130] > GoVersion:      go1.21.6
	I0913 19:23:37.763757   42355 command_runner.go:130] > Compiler:       gc
	I0913 19:23:37.763763   42355 command_runner.go:130] > Platform:       linux/amd64
	I0913 19:23:37.763768   42355 command_runner.go:130] > Linkmode:       dynamic
	I0913 19:23:37.763786   42355 command_runner.go:130] > BuildTags:      
	I0913 19:23:37.763793   42355 command_runner.go:130] >   containers_image_ostree_stub
	I0913 19:23:37.763803   42355 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0913 19:23:37.763808   42355 command_runner.go:130] >   btrfs_noversion
	I0913 19:23:37.763823   42355 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0913 19:23:37.763830   42355 command_runner.go:130] >   libdm_no_deferred_remove
	I0913 19:23:37.763834   42355 command_runner.go:130] >   seccomp
	I0913 19:23:37.763841   42355 command_runner.go:130] > LDFlags:          unknown
	I0913 19:23:37.763845   42355 command_runner.go:130] > SeccompEnabled:   true
	I0913 19:23:37.763851   42355 command_runner.go:130] > AppArmorEnabled:  false
	I0913 19:23:37.767044   42355 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:23:37.768482   42355 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:23:37.771107   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:37.771453   42355 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:23:37.771475   42355 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:23:37.771740   42355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:23:37.776352   42355 command_runner.go:130] > 192.168.39.1	host.minikube.internal
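
The grep above verifies that /etc/hosts already maps host.minikube.internal to the gateway IP. A minimal read-only sketch of the same check (the helper name is made up for illustration; actually appending the entry would require root and is left out):

package main

import (
	"fmt"
	"os"
	"strings"
)

// hostsEntryPresent reports whether /etc/hosts already contains the
// "<ip>\t<name>" line that the grep in the log looks for.
func hostsEntryPresent(path, ip, name string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	needle := ip + "\t" + name
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == needle {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	found, err := hostsEntryPresent("/etc/hosts", "192.168.39.1", "host.minikube.internal")
	if err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal entry present:", found)
}
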
	I0913 19:23:37.776451   42355 kubeadm.go:883] updating cluster {Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:23:37.776568   42355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:23:37.776608   42355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:23:37.816243   42355 command_runner.go:130] > {
	I0913 19:23:37.816268   42355 command_runner.go:130] >   "images": [
	I0913 19:23:37.816273   42355 command_runner.go:130] >     {
	I0913 19:23:37.816281   42355 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0913 19:23:37.816285   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816291   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0913 19:23:37.816295   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816299   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816307   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0913 19:23:37.816315   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0913 19:23:37.816321   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816327   42355 command_runner.go:130] >       "size": "87190579",
	I0913 19:23:37.816333   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816338   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816348   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816357   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816363   42355 command_runner.go:130] >     },
	I0913 19:23:37.816371   42355 command_runner.go:130] >     {
	I0913 19:23:37.816379   42355 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0913 19:23:37.816385   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816391   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0913 19:23:37.816399   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816405   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816415   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0913 19:23:37.816426   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0913 19:23:37.816431   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816440   42355 command_runner.go:130] >       "size": "1363676",
	I0913 19:23:37.816449   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816467   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816474   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816478   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816484   42355 command_runner.go:130] >     },
	I0913 19:23:37.816489   42355 command_runner.go:130] >     {
	I0913 19:23:37.816497   42355 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0913 19:23:37.816503   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816509   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0913 19:23:37.816514   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816520   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816536   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0913 19:23:37.816553   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0913 19:23:37.816562   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816567   42355 command_runner.go:130] >       "size": "31470524",
	I0913 19:23:37.816574   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816578   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816584   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816588   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816593   42355 command_runner.go:130] >     },
	I0913 19:23:37.816597   42355 command_runner.go:130] >     {
	I0913 19:23:37.816604   42355 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0913 19:23:37.816613   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816625   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0913 19:23:37.816634   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816643   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816657   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0913 19:23:37.816673   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0913 19:23:37.816680   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816685   42355 command_runner.go:130] >       "size": "63273227",
	I0913 19:23:37.816693   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.816704   42355 command_runner.go:130] >       "username": "nonroot",
	I0913 19:23:37.816713   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816722   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816730   42355 command_runner.go:130] >     },
	I0913 19:23:37.816738   42355 command_runner.go:130] >     {
	I0913 19:23:37.816751   42355 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0913 19:23:37.816760   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816766   42355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0913 19:23:37.816772   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816778   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816792   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0913 19:23:37.816806   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0913 19:23:37.816815   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816824   42355 command_runner.go:130] >       "size": "149009664",
	I0913 19:23:37.816833   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.816843   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.816850   42355 command_runner.go:130] >       },
	I0913 19:23:37.816854   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.816860   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.816867   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.816877   42355 command_runner.go:130] >     },
	I0913 19:23:37.816882   42355 command_runner.go:130] >     {
	I0913 19:23:37.816896   42355 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0913 19:23:37.816905   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.816916   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0913 19:23:37.816929   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816936   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.816945   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0913 19:23:37.816961   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0913 19:23:37.816970   42355 command_runner.go:130] >       ],
	I0913 19:23:37.816983   42355 command_runner.go:130] >       "size": "95237600",
	I0913 19:23:37.816992   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817001   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817010   42355 command_runner.go:130] >       },
	I0913 19:23:37.817018   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817025   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817031   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817040   42355 command_runner.go:130] >     },
	I0913 19:23:37.817049   42355 command_runner.go:130] >     {
	I0913 19:23:37.817061   42355 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0913 19:23:37.817071   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817082   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0913 19:23:37.817090   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817099   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817109   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0913 19:23:37.817124   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0913 19:23:37.817134   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817146   42355 command_runner.go:130] >       "size": "89437508",
	I0913 19:23:37.817155   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817161   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817167   42355 command_runner.go:130] >       },
	I0913 19:23:37.817176   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817185   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817192   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817196   42355 command_runner.go:130] >     },
	I0913 19:23:37.817202   42355 command_runner.go:130] >     {
	I0913 19:23:37.817213   42355 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0913 19:23:37.817223   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817234   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0913 19:23:37.817241   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817248   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817269   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0913 19:23:37.817284   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0913 19:23:37.817294   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817305   42355 command_runner.go:130] >       "size": "92733849",
	I0913 19:23:37.817314   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.817322   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817331   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817338   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817343   42355 command_runner.go:130] >     },
	I0913 19:23:37.817348   42355 command_runner.go:130] >     {
	I0913 19:23:37.817355   42355 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0913 19:23:37.817359   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817365   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0913 19:23:37.817370   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817376   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817388   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0913 19:23:37.817399   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0913 19:23:37.817405   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817411   42355 command_runner.go:130] >       "size": "68420934",
	I0913 19:23:37.817417   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817423   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.817428   42355 command_runner.go:130] >       },
	I0913 19:23:37.817434   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817439   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817443   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.817446   42355 command_runner.go:130] >     },
	I0913 19:23:37.817451   42355 command_runner.go:130] >     {
	I0913 19:23:37.817460   42355 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0913 19:23:37.817471   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.817478   42355 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0913 19:23:37.817486   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817493   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.817507   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0913 19:23:37.817520   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0913 19:23:37.817527   42355 command_runner.go:130] >       ],
	I0913 19:23:37.817532   42355 command_runner.go:130] >       "size": "742080",
	I0913 19:23:37.817540   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.817548   42355 command_runner.go:130] >         "value": "65535"
	I0913 19:23:37.817556   42355 command_runner.go:130] >       },
	I0913 19:23:37.817563   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.817571   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.817578   42355 command_runner.go:130] >       "pinned": true
	I0913 19:23:37.817585   42355 command_runner.go:130] >     }
	I0913 19:23:37.817591   42355 command_runner.go:130] >   ]
	I0913 19:23:37.817597   42355 command_runner.go:130] > }
	I0913 19:23:37.817809   42355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:23:37.817825   42355 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:23:37.817880   42355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:23:37.850471   42355 command_runner.go:130] > {
	I0913 19:23:37.850495   42355 command_runner.go:130] >   "images": [
	I0913 19:23:37.850501   42355 command_runner.go:130] >     {
	I0913 19:23:37.850514   42355 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0913 19:23:37.850521   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850530   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0913 19:23:37.850533   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850538   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850546   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0913 19:23:37.850552   42355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0913 19:23:37.850556   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850561   42355 command_runner.go:130] >       "size": "87190579",
	I0913 19:23:37.850564   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850569   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850576   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850586   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850594   42355 command_runner.go:130] >     },
	I0913 19:23:37.850600   42355 command_runner.go:130] >     {
	I0913 19:23:37.850611   42355 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0913 19:23:37.850620   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850627   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0913 19:23:37.850633   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850637   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850646   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0913 19:23:37.850655   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0913 19:23:37.850659   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850664   42355 command_runner.go:130] >       "size": "1363676",
	I0913 19:23:37.850670   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850683   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850692   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850699   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850706   42355 command_runner.go:130] >     },
	I0913 19:23:37.850711   42355 command_runner.go:130] >     {
	I0913 19:23:37.850724   42355 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0913 19:23:37.850731   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850737   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0913 19:23:37.850743   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850749   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850764   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0913 19:23:37.850777   42355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0913 19:23:37.850785   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850792   42355 command_runner.go:130] >       "size": "31470524",
	I0913 19:23:37.850803   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850810   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.850819   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850824   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850830   42355 command_runner.go:130] >     },
	I0913 19:23:37.850834   42355 command_runner.go:130] >     {
	I0913 19:23:37.850846   42355 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0913 19:23:37.850856   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.850864   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0913 19:23:37.850872   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850879   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.850893   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0913 19:23:37.850911   42355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0913 19:23:37.850917   42355 command_runner.go:130] >       ],
	I0913 19:23:37.850923   42355 command_runner.go:130] >       "size": "63273227",
	I0913 19:23:37.850932   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.850942   42355 command_runner.go:130] >       "username": "nonroot",
	I0913 19:23:37.850960   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.850969   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.850974   42355 command_runner.go:130] >     },
	I0913 19:23:37.850981   42355 command_runner.go:130] >     {
	I0913 19:23:37.850990   42355 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0913 19:23:37.850998   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851002   42355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0913 19:23:37.851008   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851015   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851029   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0913 19:23:37.851043   42355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0913 19:23:37.851051   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851059   42355 command_runner.go:130] >       "size": "149009664",
	I0913 19:23:37.851068   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851074   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851081   42355 command_runner.go:130] >       },
	I0913 19:23:37.851085   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851090   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851097   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851109   42355 command_runner.go:130] >     },
	I0913 19:23:37.851115   42355 command_runner.go:130] >     {
	I0913 19:23:37.851128   42355 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0913 19:23:37.851136   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851150   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0913 19:23:37.851160   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851166   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851178   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0913 19:23:37.851193   42355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0913 19:23:37.851203   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851211   42355 command_runner.go:130] >       "size": "95237600",
	I0913 19:23:37.851220   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851226   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851235   42355 command_runner.go:130] >       },
	I0913 19:23:37.851241   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851249   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851253   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851256   42355 command_runner.go:130] >     },
	I0913 19:23:37.851262   42355 command_runner.go:130] >     {
	I0913 19:23:37.851275   42355 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0913 19:23:37.851284   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851293   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0913 19:23:37.851302   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851309   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851323   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0913 19:23:37.851336   42355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0913 19:23:37.851346   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851352   42355 command_runner.go:130] >       "size": "89437508",
	I0913 19:23:37.851362   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851369   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851376   42355 command_runner.go:130] >       },
	I0913 19:23:37.851382   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851390   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851396   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851404   42355 command_runner.go:130] >     },
	I0913 19:23:37.851410   42355 command_runner.go:130] >     {
	I0913 19:23:37.851423   42355 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0913 19:23:37.851432   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851441   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0913 19:23:37.851450   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851457   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851478   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0913 19:23:37.851493   42355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0913 19:23:37.851501   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851507   42355 command_runner.go:130] >       "size": "92733849",
	I0913 19:23:37.851516   42355 command_runner.go:130] >       "uid": null,
	I0913 19:23:37.851522   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851530   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851536   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851543   42355 command_runner.go:130] >     },
	I0913 19:23:37.851549   42355 command_runner.go:130] >     {
	I0913 19:23:37.851559   42355 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0913 19:23:37.851568   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851576   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0913 19:23:37.851584   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851591   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851606   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0913 19:23:37.851619   42355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0913 19:23:37.851628   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851635   42355 command_runner.go:130] >       "size": "68420934",
	I0913 19:23:37.851644   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851648   42355 command_runner.go:130] >         "value": "0"
	I0913 19:23:37.851652   42355 command_runner.go:130] >       },
	I0913 19:23:37.851656   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851660   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851664   42355 command_runner.go:130] >       "pinned": false
	I0913 19:23:37.851667   42355 command_runner.go:130] >     },
	I0913 19:23:37.851671   42355 command_runner.go:130] >     {
	I0913 19:23:37.851677   42355 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0913 19:23:37.851683   42355 command_runner.go:130] >       "repoTags": [
	I0913 19:23:37.851688   42355 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0913 19:23:37.851694   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851698   42355 command_runner.go:130] >       "repoDigests": [
	I0913 19:23:37.851708   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0913 19:23:37.851719   42355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0913 19:23:37.851726   42355 command_runner.go:130] >       ],
	I0913 19:23:37.851730   42355 command_runner.go:130] >       "size": "742080",
	I0913 19:23:37.851734   42355 command_runner.go:130] >       "uid": {
	I0913 19:23:37.851738   42355 command_runner.go:130] >         "value": "65535"
	I0913 19:23:37.851744   42355 command_runner.go:130] >       },
	I0913 19:23:37.851748   42355 command_runner.go:130] >       "username": "",
	I0913 19:23:37.851751   42355 command_runner.go:130] >       "spec": null,
	I0913 19:23:37.851758   42355 command_runner.go:130] >       "pinned": true
	I0913 19:23:37.851761   42355 command_runner.go:130] >     }
	I0913 19:23:37.851764   42355 command_runner.go:130] >   ]
	I0913 19:23:37.851769   42355 command_runner.go:130] > }
	I0913 19:23:37.851874   42355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:23:37.851884   42355 cache_images.go:84] Images are preloaded, skipping loading
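
Both crictl images dumps above share the same JSON shape. A short sketch of decoding that output and checking a couple of the tags expected for Kubernetes v1.31.1; the struct keeps only the fields shown in the log, and the command invocation mirrors the one in the log rather than minikube's internal runner:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models the JSON printed by `crictl images --output json`,
// restricted to the fields visible in the dumps above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Collect every tag that is already present in the CRI-O image store.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Two of the tags the preload check above relies on.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	} {
		fmt.Printf("%-45s present=%v\n", want, have[want])
	}
}
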
	I0913 19:23:37.851891   42355 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.1 crio true true} ...
	I0913 19:23:37.851977   42355 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-832180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
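
The kubelet unit printed above is what kubeadm.go renders for this node. Below is a sketch of materialising it as a systemd drop-in; the drop-in path and file mode are assumptions, and only the unit body is taken from the log.

package main

import (
	"fmt"
	"os"
)

// kubeletDropIn reproduces the unit body from the log. The ExecStart flags
// (binary path, hostname override, node IP) are specific to this node.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-832180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107

[Install]
`

func main() {
	// Assumed drop-in location; writing it requires root.
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(kubeletDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote", path, "- reload systemd and restart kubelet to apply")
}
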
	I0913 19:23:37.852038   42355 ssh_runner.go:195] Run: crio config
	I0913 19:23:37.885707   42355 command_runner.go:130] ! time="2024-09-13 19:23:37.862901052Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0913 19:23:37.892051   42355 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0913 19:23:37.897600   42355 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0913 19:23:37.897632   42355 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0913 19:23:37.897643   42355 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0913 19:23:37.897648   42355 command_runner.go:130] > #
	I0913 19:23:37.897658   42355 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0913 19:23:37.897668   42355 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0913 19:23:37.897677   42355 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0913 19:23:37.897684   42355 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0913 19:23:37.897688   42355 command_runner.go:130] > # reload'.
	I0913 19:23:37.897693   42355 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0913 19:23:37.897703   42355 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0913 19:23:37.897709   42355 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0913 19:23:37.897715   42355 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0913 19:23:37.897723   42355 command_runner.go:130] > [crio]
	I0913 19:23:37.897731   42355 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0913 19:23:37.897741   42355 command_runner.go:130] > # containers images, in this directory.
	I0913 19:23:37.897748   42355 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0913 19:23:37.897764   42355 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0913 19:23:37.897775   42355 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0913 19:23:37.897786   42355 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0913 19:23:37.897795   42355 command_runner.go:130] > # imagestore = ""
	I0913 19:23:37.897806   42355 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0913 19:23:37.897817   42355 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0913 19:23:37.897826   42355 command_runner.go:130] > storage_driver = "overlay"
	I0913 19:23:37.897834   42355 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0913 19:23:37.897847   42355 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0913 19:23:37.897864   42355 command_runner.go:130] > storage_option = [
	I0913 19:23:37.897874   42355 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0913 19:23:37.897879   42355 command_runner.go:130] > ]
	I0913 19:23:37.897889   42355 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0913 19:23:37.897895   42355 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0913 19:23:37.897901   42355 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0913 19:23:37.897907   42355 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0913 19:23:37.897914   42355 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0913 19:23:37.897919   42355 command_runner.go:130] > # always happen on a node reboot
	I0913 19:23:37.897926   42355 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0913 19:23:37.897935   42355 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0913 19:23:37.897943   42355 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0913 19:23:37.897948   42355 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0913 19:23:37.897954   42355 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0913 19:23:37.897961   42355 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0913 19:23:37.897971   42355 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0913 19:23:37.897975   42355 command_runner.go:130] > # internal_wipe = true
	I0913 19:23:37.897982   42355 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0913 19:23:37.897989   42355 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0913 19:23:37.897993   42355 command_runner.go:130] > # internal_repair = false
	I0913 19:23:37.897999   42355 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0913 19:23:37.898005   42355 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0913 19:23:37.898010   42355 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0913 19:23:37.898015   42355 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0913 19:23:37.898023   42355 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0913 19:23:37.898029   42355 command_runner.go:130] > [crio.api]
	I0913 19:23:37.898034   42355 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0913 19:23:37.898039   42355 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0913 19:23:37.898046   42355 command_runner.go:130] > # IP address on which the stream server will listen.
	I0913 19:23:37.898050   42355 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0913 19:23:37.898058   42355 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0913 19:23:37.898063   42355 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0913 19:23:37.898068   42355 command_runner.go:130] > # stream_port = "0"
	I0913 19:23:37.898076   42355 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0913 19:23:37.898082   42355 command_runner.go:130] > # stream_enable_tls = false
	I0913 19:23:37.898088   42355 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0913 19:23:37.898102   42355 command_runner.go:130] > # stream_idle_timeout = ""
	I0913 19:23:37.898112   42355 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0913 19:23:37.898122   42355 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0913 19:23:37.898126   42355 command_runner.go:130] > # minutes.
	I0913 19:23:37.898130   42355 command_runner.go:130] > # stream_tls_cert = ""
	I0913 19:23:37.898135   42355 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0913 19:23:37.898141   42355 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0913 19:23:37.898148   42355 command_runner.go:130] > # stream_tls_key = ""
	I0913 19:23:37.898153   42355 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0913 19:23:37.898159   42355 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0913 19:23:37.898174   42355 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0913 19:23:37.898182   42355 command_runner.go:130] > # stream_tls_ca = ""
	I0913 19:23:37.898189   42355 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0913 19:23:37.898196   42355 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0913 19:23:37.898203   42355 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0913 19:23:37.898209   42355 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0913 19:23:37.898215   42355 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0913 19:23:37.898222   42355 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0913 19:23:37.898226   42355 command_runner.go:130] > [crio.runtime]
	I0913 19:23:37.898234   42355 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0913 19:23:37.898239   42355 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0913 19:23:37.898246   42355 command_runner.go:130] > # "nofile=1024:2048"
	I0913 19:23:37.898251   42355 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0913 19:23:37.898257   42355 command_runner.go:130] > # default_ulimits = [
	I0913 19:23:37.898260   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898266   42355 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0913 19:23:37.898270   42355 command_runner.go:130] > # no_pivot = false
	I0913 19:23:37.898280   42355 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0913 19:23:37.898288   42355 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0913 19:23:37.898293   42355 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0913 19:23:37.898301   42355 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0913 19:23:37.898308   42355 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0913 19:23:37.898314   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0913 19:23:37.898321   42355 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0913 19:23:37.898325   42355 command_runner.go:130] > # Cgroup setting for conmon
	I0913 19:23:37.898333   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0913 19:23:37.898337   42355 command_runner.go:130] > conmon_cgroup = "pod"
	I0913 19:23:37.898343   42355 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0913 19:23:37.898348   42355 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0913 19:23:37.898356   42355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0913 19:23:37.898360   42355 command_runner.go:130] > conmon_env = [
	I0913 19:23:37.898365   42355 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0913 19:23:37.898370   42355 command_runner.go:130] > ]
	I0913 19:23:37.898375   42355 command_runner.go:130] > # Additional environment variables to set for all the
	I0913 19:23:37.898383   42355 command_runner.go:130] > # containers. These are overridden if set in the
	I0913 19:23:37.898390   42355 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0913 19:23:37.898394   42355 command_runner.go:130] > # default_env = [
	I0913 19:23:37.898401   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898407   42355 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0913 19:23:37.898415   42355 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0913 19:23:37.898419   42355 command_runner.go:130] > # selinux = false
	I0913 19:23:37.898430   42355 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0913 19:23:37.898438   42355 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0913 19:23:37.898446   42355 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0913 19:23:37.898450   42355 command_runner.go:130] > # seccomp_profile = ""
	I0913 19:23:37.898458   42355 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0913 19:23:37.898463   42355 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0913 19:23:37.898471   42355 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0913 19:23:37.898475   42355 command_runner.go:130] > # which might increase security.
	I0913 19:23:37.898482   42355 command_runner.go:130] > # This option is currently deprecated,
	I0913 19:23:37.898488   42355 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0913 19:23:37.898495   42355 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0913 19:23:37.898501   42355 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0913 19:23:37.898513   42355 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0913 19:23:37.898521   42355 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0913 19:23:37.898529   42355 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0913 19:23:37.898534   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.898541   42355 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0913 19:23:37.898546   42355 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0913 19:23:37.898552   42355 command_runner.go:130] > # the cgroup blockio controller.
	I0913 19:23:37.898556   42355 command_runner.go:130] > # blockio_config_file = ""
	I0913 19:23:37.898562   42355 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0913 19:23:37.898568   42355 command_runner.go:130] > # blockio parameters.
	I0913 19:23:37.898572   42355 command_runner.go:130] > # blockio_reload = false
	I0913 19:23:37.898578   42355 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0913 19:23:37.898584   42355 command_runner.go:130] > # irqbalance daemon.
	I0913 19:23:37.898589   42355 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0913 19:23:37.898597   42355 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0913 19:23:37.898603   42355 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0913 19:23:37.898612   42355 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0913 19:23:37.898618   42355 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0913 19:23:37.898626   42355 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0913 19:23:37.898631   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.898637   42355 command_runner.go:130] > # rdt_config_file = ""
	I0913 19:23:37.898642   42355 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0913 19:23:37.898649   42355 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0913 19:23:37.898664   42355 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0913 19:23:37.898670   42355 command_runner.go:130] > # separate_pull_cgroup = ""
	I0913 19:23:37.898676   42355 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0913 19:23:37.898684   42355 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0913 19:23:37.898688   42355 command_runner.go:130] > # will be added.
	I0913 19:23:37.898692   42355 command_runner.go:130] > # default_capabilities = [
	I0913 19:23:37.898698   42355 command_runner.go:130] > # 	"CHOWN",
	I0913 19:23:37.898701   42355 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0913 19:23:37.898707   42355 command_runner.go:130] > # 	"FSETID",
	I0913 19:23:37.898710   42355 command_runner.go:130] > # 	"FOWNER",
	I0913 19:23:37.898714   42355 command_runner.go:130] > # 	"SETGID",
	I0913 19:23:37.898718   42355 command_runner.go:130] > # 	"SETUID",
	I0913 19:23:37.898722   42355 command_runner.go:130] > # 	"SETPCAP",
	I0913 19:23:37.898728   42355 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0913 19:23:37.898734   42355 command_runner.go:130] > # 	"KILL",
	I0913 19:23:37.898742   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898753   42355 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0913 19:23:37.898766   42355 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0913 19:23:37.898780   42355 command_runner.go:130] > # add_inheritable_capabilities = false
	I0913 19:23:37.898792   42355 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0913 19:23:37.898804   42355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0913 19:23:37.898812   42355 command_runner.go:130] > default_sysctls = [
	I0913 19:23:37.898819   42355 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0913 19:23:37.898826   42355 command_runner.go:130] > ]
	I0913 19:23:37.898830   42355 command_runner.go:130] > # List of devices on the host that a
	I0913 19:23:37.898839   42355 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0913 19:23:37.898843   42355 command_runner.go:130] > # allowed_devices = [
	I0913 19:23:37.898848   42355 command_runner.go:130] > # 	"/dev/fuse",
	I0913 19:23:37.898852   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898858   42355 command_runner.go:130] > # List of additional devices, specified as
	I0913 19:23:37.898866   42355 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0913 19:23:37.898873   42355 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0913 19:23:37.898878   42355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0913 19:23:37.898884   42355 command_runner.go:130] > # additional_devices = [
	I0913 19:23:37.898887   42355 command_runner.go:130] > # ]
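A short sketch of the two device lists described above, reusing the example format from the comments; the device paths are illustrative assumptions, not settings taken from this cluster:
	[crio.runtime]
	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]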
	I0913 19:23:37.898892   42355 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0913 19:23:37.898896   42355 command_runner.go:130] > # cdi_spec_dirs = [
	I0913 19:23:37.898899   42355 command_runner.go:130] > # 	"/etc/cdi",
	I0913 19:23:37.898903   42355 command_runner.go:130] > # 	"/var/run/cdi",
	I0913 19:23:37.898906   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898912   42355 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0913 19:23:37.898918   42355 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0913 19:23:37.898921   42355 command_runner.go:130] > # Defaults to false.
	I0913 19:23:37.898927   42355 command_runner.go:130] > # device_ownership_from_security_context = false
	I0913 19:23:37.898933   42355 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0913 19:23:37.898939   42355 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0913 19:23:37.898943   42355 command_runner.go:130] > # hooks_dir = [
	I0913 19:23:37.898947   42355 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0913 19:23:37.898950   42355 command_runner.go:130] > # ]
	I0913 19:23:37.898956   42355 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0913 19:23:37.898967   42355 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0913 19:23:37.898972   42355 command_runner.go:130] > # its default mounts from the following two files:
	I0913 19:23:37.898976   42355 command_runner.go:130] > #
	I0913 19:23:37.898981   42355 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0913 19:23:37.898989   42355 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0913 19:23:37.898994   42355 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0913 19:23:37.898999   42355 command_runner.go:130] > #
	I0913 19:23:37.899005   42355 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0913 19:23:37.899012   42355 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0913 19:23:37.899020   42355 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0913 19:23:37.899027   42355 command_runner.go:130] > #      only add mounts it finds in this file.
	I0913 19:23:37.899031   42355 command_runner.go:130] > #
	I0913 19:23:37.899035   42355 command_runner.go:130] > # default_mounts_file = ""
	I0913 19:23:37.899042   42355 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0913 19:23:37.899048   42355 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0913 19:23:37.899054   42355 command_runner.go:130] > pids_limit = 1024
	I0913 19:23:37.899060   42355 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0913 19:23:37.899068   42355 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0913 19:23:37.899074   42355 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0913 19:23:37.899084   42355 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0913 19:23:37.899089   42355 command_runner.go:130] > # log_size_max = -1
	I0913 19:23:37.899098   42355 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0913 19:23:37.899102   42355 command_runner.go:130] > # log_to_journald = false
	I0913 19:23:37.899111   42355 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0913 19:23:37.899116   42355 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0913 19:23:37.899121   42355 command_runner.go:130] > # Path to directory for container attach sockets.
	I0913 19:23:37.899128   42355 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0913 19:23:37.899133   42355 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0913 19:23:37.899138   42355 command_runner.go:130] > # bind_mount_prefix = ""
	I0913 19:23:37.899143   42355 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0913 19:23:37.899149   42355 command_runner.go:130] > # read_only = false
	I0913 19:23:37.899155   42355 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0913 19:23:37.899163   42355 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0913 19:23:37.899167   42355 command_runner.go:130] > # live configuration reload.
	I0913 19:23:37.899171   42355 command_runner.go:130] > # log_level = "info"
	I0913 19:23:37.899177   42355 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0913 19:23:37.899185   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.899189   42355 command_runner.go:130] > # log_filter = ""
	I0913 19:23:37.899194   42355 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0913 19:23:37.899202   42355 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0913 19:23:37.899205   42355 command_runner.go:130] > # separated by comma.
	I0913 19:23:37.899212   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899215   42355 command_runner.go:130] > # uid_mappings = ""
	I0913 19:23:37.899221   42355 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0913 19:23:37.899227   42355 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0913 19:23:37.899231   42355 command_runner.go:130] > # separated by comma.
	I0913 19:23:37.899238   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899247   42355 command_runner.go:130] > # gid_mappings = ""
	I0913 19:23:37.899252   42355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0913 19:23:37.899259   42355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0913 19:23:37.899266   42355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0913 19:23:37.899273   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899279   42355 command_runner.go:130] > # minimum_mappable_uid = -1
	I0913 19:23:37.899285   42355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0913 19:23:37.899294   42355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0913 19:23:37.899300   42355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0913 19:23:37.899309   42355 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0913 19:23:37.899313   42355 command_runner.go:130] > # minimum_mappable_gid = -1
	I0913 19:23:37.899319   42355 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0913 19:23:37.899328   42355 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0913 19:23:37.899333   42355 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0913 19:23:37.899339   42355 command_runner.go:130] > # ctr_stop_timeout = 30
	I0913 19:23:37.899345   42355 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0913 19:23:37.899351   42355 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0913 19:23:37.899358   42355 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0913 19:23:37.899362   42355 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0913 19:23:37.899367   42355 command_runner.go:130] > drop_infra_ctr = false
	I0913 19:23:37.899373   42355 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0913 19:23:37.899381   42355 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0913 19:23:37.899388   42355 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0913 19:23:37.899394   42355 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0913 19:23:37.899401   42355 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0913 19:23:37.899408   42355 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0913 19:23:37.899414   42355 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0913 19:23:37.899421   42355 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0913 19:23:37.899425   42355 command_runner.go:130] > # shared_cpuset = ""
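A sketch of both cpuset options using Linux CPU list syntax; the CPU numbers are arbitrary examples, not a recommendation for this node:
	[crio.runtime]
	infra_ctr_cpuset = "0"
	shared_cpuset = "2-3,6"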
	I0913 19:23:37.899436   42355 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0913 19:23:37.899441   42355 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0913 19:23:37.899448   42355 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0913 19:23:37.899454   42355 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0913 19:23:37.899461   42355 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0913 19:23:37.899466   42355 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0913 19:23:37.899476   42355 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0913 19:23:37.899482   42355 command_runner.go:130] > # enable_criu_support = false
	I0913 19:23:37.899487   42355 command_runner.go:130] > # Enable/disable the generation of the container,
	I0913 19:23:37.899495   42355 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0913 19:23:37.899500   42355 command_runner.go:130] > # enable_pod_events = false
	I0913 19:23:37.899508   42355 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0913 19:23:37.899521   42355 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0913 19:23:37.899525   42355 command_runner.go:130] > # default_runtime = "runc"
	I0913 19:23:37.899532   42355 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0913 19:23:37.899540   42355 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0913 19:23:37.899551   42355 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0913 19:23:37.899556   42355 command_runner.go:130] > # creation as a file is not desired either.
	I0913 19:23:37.899566   42355 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0913 19:23:37.899571   42355 command_runner.go:130] > # the hostname is being managed dynamically.
	I0913 19:23:37.899578   42355 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0913 19:23:37.899581   42355 command_runner.go:130] > # ]
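A minimal sketch of this option using the /etc/hostname case mentioned in the comment above (illustrative only; the test cluster leaves the list empty):
	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]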
	I0913 19:23:37.899588   42355 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0913 19:23:37.899596   42355 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0913 19:23:37.899602   42355 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0913 19:23:37.899609   42355 command_runner.go:130] > # Each entry in the table should follow the format:
	I0913 19:23:37.899612   42355 command_runner.go:130] > #
	I0913 19:23:37.899617   42355 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0913 19:23:37.899621   42355 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0913 19:23:37.899668   42355 command_runner.go:130] > # runtime_type = "oci"
	I0913 19:23:37.899679   42355 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0913 19:23:37.899683   42355 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0913 19:23:37.899688   42355 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0913 19:23:37.899692   42355 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0913 19:23:37.899698   42355 command_runner.go:130] > # monitor_env = []
	I0913 19:23:37.899703   42355 command_runner.go:130] > # privileged_without_host_devices = false
	I0913 19:23:37.899707   42355 command_runner.go:130] > # allowed_annotations = []
	I0913 19:23:37.899715   42355 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0913 19:23:37.899718   42355 command_runner.go:130] > # Where:
	I0913 19:23:37.899725   42355 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0913 19:23:37.899738   42355 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0913 19:23:37.899750   42355 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0913 19:23:37.899762   42355 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0913 19:23:37.899775   42355 command_runner.go:130] > #   in $PATH.
	I0913 19:23:37.899787   42355 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0913 19:23:37.899795   42355 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0913 19:23:37.899807   42355 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0913 19:23:37.899814   42355 command_runner.go:130] > #   state.
	I0913 19:23:37.899825   42355 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0913 19:23:37.899833   42355 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0913 19:23:37.899840   42355 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0913 19:23:37.899847   42355 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0913 19:23:37.899853   42355 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0913 19:23:37.899861   42355 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0913 19:23:37.899867   42355 command_runner.go:130] > #   The currently recognized values are:
	I0913 19:23:37.899875   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0913 19:23:37.899881   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0913 19:23:37.899888   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0913 19:23:37.899894   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0913 19:23:37.899902   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0913 19:23:37.899910   42355 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0913 19:23:37.899916   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0913 19:23:37.899924   42355 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0913 19:23:37.899930   42355 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0913 19:23:37.899938   42355 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0913 19:23:37.899943   42355 command_runner.go:130] > #   deprecated option "conmon".
	I0913 19:23:37.899950   42355 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0913 19:23:37.899957   42355 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0913 19:23:37.899964   42355 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0913 19:23:37.899970   42355 command_runner.go:130] > #   should be moved to the container's cgroup
	I0913 19:23:37.899976   42355 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0913 19:23:37.899984   42355 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0913 19:23:37.899993   42355 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0913 19:23:37.900001   42355 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0913 19:23:37.900004   42355 command_runner.go:130] > #
	I0913 19:23:37.900008   42355 command_runner.go:130] > # Using the seccomp notifier feature:
	I0913 19:23:37.900014   42355 command_runner.go:130] > #
	I0913 19:23:37.900021   42355 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0913 19:23:37.900027   42355 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0913 19:23:37.900033   42355 command_runner.go:130] > #
	I0913 19:23:37.900038   42355 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0913 19:23:37.900047   42355 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0913 19:23:37.900050   42355 command_runner.go:130] > #
	I0913 19:23:37.900056   42355 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0913 19:23:37.900062   42355 command_runner.go:130] > # feature.
	I0913 19:23:37.900065   42355 command_runner.go:130] > #
	I0913 19:23:37.900070   42355 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0913 19:23:37.900078   42355 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0913 19:23:37.900084   42355 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0913 19:23:37.900092   42355 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0913 19:23:37.900098   42355 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0913 19:23:37.900101   42355 command_runner.go:130] > #
	I0913 19:23:37.900107   42355 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0913 19:23:37.900113   42355 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0913 19:23:37.900116   42355 command_runner.go:130] > #
	I0913 19:23:37.900123   42355 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0913 19:23:37.900130   42355 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0913 19:23:37.900133   42355 command_runner.go:130] > #
	I0913 19:23:37.900138   42355 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0913 19:23:37.900146   42355 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0913 19:23:37.900150   42355 command_runner.go:130] > # limitation.
	I0913 19:23:37.900157   42355 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0913 19:23:37.900164   42355 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0913 19:23:37.900168   42355 command_runner.go:130] > runtime_type = "oci"
	I0913 19:23:37.900174   42355 command_runner.go:130] > runtime_root = "/run/runc"
	I0913 19:23:37.900179   42355 command_runner.go:130] > runtime_config_path = ""
	I0913 19:23:37.900184   42355 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0913 19:23:37.900191   42355 command_runner.go:130] > monitor_cgroup = "pod"
	I0913 19:23:37.900195   42355 command_runner.go:130] > monitor_exec_cgroup = ""
	I0913 19:23:37.900200   42355 command_runner.go:130] > monitor_env = [
	I0913 19:23:37.900205   42355 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0913 19:23:37.900211   42355 command_runner.go:130] > ]
	I0913 19:23:37.900215   42355 command_runner.go:130] > privileged_without_host_devices = false
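As a sketch of the runtime-handler table format documented above, an additional handler could be declared like this; the hypothetical crun entry, its paths, and the opt-in to the seccomp notifier annotation are assumptions for illustration, not part of this cluster's configuration:
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]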
	I0913 19:23:37.900222   42355 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0913 19:23:37.900229   42355 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0913 19:23:37.900235   42355 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0913 19:23:37.900244   42355 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0913 19:23:37.900256   42355 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0913 19:23:37.900265   42355 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0913 19:23:37.900274   42355 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0913 19:23:37.900283   42355 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0913 19:23:37.900289   42355 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0913 19:23:37.900297   42355 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0913 19:23:37.900303   42355 command_runner.go:130] > # Example:
	I0913 19:23:37.900307   42355 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0913 19:23:37.900311   42355 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0913 19:23:37.900318   42355 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0913 19:23:37.900323   42355 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0913 19:23:37.900328   42355 command_runner.go:130] > # cpuset = 0
	I0913 19:23:37.900332   42355 command_runner.go:130] > # cpushares = "0-1"
	I0913 19:23:37.900335   42355 command_runner.go:130] > # Where:
	I0913 19:23:37.900343   42355 command_runner.go:130] > # The workload name is workload-type.
	I0913 19:23:37.900350   42355 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0913 19:23:37.900357   42355 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0913 19:23:37.900362   42355 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0913 19:23:37.900371   42355 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0913 19:23:37.900378   42355 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
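A concrete (uncommented) version of the workload example above, assuming cpuset takes a Linux CPU list string and cpushares an integer share value; the numbers are illustrative only:
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpushares = 512
	cpuset = "0-1"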
	I0913 19:23:37.900383   42355 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0913 19:23:37.900390   42355 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0913 19:23:37.900396   42355 command_runner.go:130] > # Default value is set to true
	I0913 19:23:37.900400   42355 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0913 19:23:37.900408   42355 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0913 19:23:37.900413   42355 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0913 19:23:37.900419   42355 command_runner.go:130] > # Default value is set to 'false'
	I0913 19:23:37.900423   42355 command_runner.go:130] > # disable_hostport_mapping = false
	I0913 19:23:37.900436   42355 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0913 19:23:37.900440   42355 command_runner.go:130] > #
	I0913 19:23:37.900446   42355 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0913 19:23:37.900452   42355 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0913 19:23:37.900458   42355 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0913 19:23:37.900463   42355 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0913 19:23:37.900470   42355 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0913 19:23:37.900474   42355 command_runner.go:130] > [crio.image]
	I0913 19:23:37.900480   42355 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0913 19:23:37.900484   42355 command_runner.go:130] > # default_transport = "docker://"
	I0913 19:23:37.900489   42355 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0913 19:23:37.900495   42355 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0913 19:23:37.900498   42355 command_runner.go:130] > # global_auth_file = ""
	I0913 19:23:37.900503   42355 command_runner.go:130] > # The image used to instantiate infra containers.
	I0913 19:23:37.900508   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.900512   42355 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0913 19:23:37.900518   42355 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0913 19:23:37.900523   42355 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0913 19:23:37.900528   42355 command_runner.go:130] > # This option supports live configuration reload.
	I0913 19:23:37.900532   42355 command_runner.go:130] > # pause_image_auth_file = ""
	I0913 19:23:37.900537   42355 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0913 19:23:37.900543   42355 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0913 19:23:37.900550   42355 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0913 19:23:37.900555   42355 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0913 19:23:37.900559   42355 command_runner.go:130] > # pause_command = "/pause"
	I0913 19:23:37.900564   42355 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0913 19:23:37.900570   42355 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0913 19:23:37.900575   42355 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0913 19:23:37.900582   42355 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0913 19:23:37.900587   42355 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0913 19:23:37.900592   42355 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0913 19:23:37.900596   42355 command_runner.go:130] > # pinned_images = [
	I0913 19:23:37.900599   42355 command_runner.go:130] > # ]
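A sketch of pinned_images combining the pause_image configured above with a hypothetical glob entry, to illustrate the exact and wildcard match styles the comment describes:
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",
		"registry.k8s.io/kube-apiserver*",
	]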
	I0913 19:23:37.900604   42355 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0913 19:23:37.900610   42355 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0913 19:23:37.900616   42355 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0913 19:23:37.900624   42355 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0913 19:23:37.900629   42355 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0913 19:23:37.900633   42355 command_runner.go:130] > # signature_policy = ""
	I0913 19:23:37.900639   42355 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0913 19:23:37.900648   42355 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0913 19:23:37.900654   42355 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0913 19:23:37.900662   42355 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0913 19:23:37.900670   42355 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0913 19:23:37.900675   42355 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0913 19:23:37.900683   42355 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0913 19:23:37.900689   42355 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0913 19:23:37.900695   42355 command_runner.go:130] > # changing them here.
	I0913 19:23:37.900699   42355 command_runner.go:130] > # insecure_registries = [
	I0913 19:23:37.900702   42355 command_runner.go:130] > # ]
	I0913 19:23:37.900708   42355 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0913 19:23:37.900715   42355 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0913 19:23:37.900719   42355 command_runner.go:130] > # image_volumes = "mkdir"
	I0913 19:23:37.900724   42355 command_runner.go:130] > # Temporary directory to use for storing big files
	I0913 19:23:37.900731   42355 command_runner.go:130] > # big_files_temporary_dir = ""
	I0913 19:23:37.900740   42355 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0913 19:23:37.900749   42355 command_runner.go:130] > # CNI plugins.
	I0913 19:23:37.900755   42355 command_runner.go:130] > [crio.network]
	I0913 19:23:37.900767   42355 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0913 19:23:37.900779   42355 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0913 19:23:37.900788   42355 command_runner.go:130] > # cni_default_network = ""
	I0913 19:23:37.900797   42355 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0913 19:23:37.900807   42355 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0913 19:23:37.900816   42355 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0913 19:23:37.900823   42355 command_runner.go:130] > # plugin_dirs = [
	I0913 19:23:37.900828   42355 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0913 19:23:37.900833   42355 command_runner.go:130] > # ]
	I0913 19:23:37.900839   42355 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0913 19:23:37.900846   42355 command_runner.go:130] > [crio.metrics]
	I0913 19:23:37.900851   42355 command_runner.go:130] > # Globally enable or disable metrics support.
	I0913 19:23:37.900857   42355 command_runner.go:130] > enable_metrics = true
	I0913 19:23:37.900862   42355 command_runner.go:130] > # Specify enabled metrics collectors.
	I0913 19:23:37.900869   42355 command_runner.go:130] > # Per default all metrics are enabled.
	I0913 19:23:37.900874   42355 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0913 19:23:37.900882   42355 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0913 19:23:37.900888   42355 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0913 19:23:37.900894   42355 command_runner.go:130] > # metrics_collectors = [
	I0913 19:23:37.900898   42355 command_runner.go:130] > # 	"operations",
	I0913 19:23:37.900902   42355 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0913 19:23:37.900907   42355 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0913 19:23:37.900912   42355 command_runner.go:130] > # 	"operations_errors",
	I0913 19:23:37.900916   42355 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0913 19:23:37.900923   42355 command_runner.go:130] > # 	"image_pulls_by_name",
	I0913 19:23:37.900927   42355 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0913 19:23:37.900934   42355 command_runner.go:130] > # 	"image_pulls_failures",
	I0913 19:23:37.900940   42355 command_runner.go:130] > # 	"image_pulls_successes",
	I0913 19:23:37.900945   42355 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0913 19:23:37.900951   42355 command_runner.go:130] > # 	"image_layer_reuse",
	I0913 19:23:37.900955   42355 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0913 19:23:37.900961   42355 command_runner.go:130] > # 	"containers_oom_total",
	I0913 19:23:37.900965   42355 command_runner.go:130] > # 	"containers_oom",
	I0913 19:23:37.900969   42355 command_runner.go:130] > # 	"processes_defunct",
	I0913 19:23:37.900973   42355 command_runner.go:130] > # 	"operations_total",
	I0913 19:23:37.900977   42355 command_runner.go:130] > # 	"operations_latency_seconds",
	I0913 19:23:37.900982   42355 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0913 19:23:37.900989   42355 command_runner.go:130] > # 	"operations_errors_total",
	I0913 19:23:37.900994   42355 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0913 19:23:37.901001   42355 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0913 19:23:37.901005   42355 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0913 19:23:37.901011   42355 command_runner.go:130] > # 	"image_pulls_success_total",
	I0913 19:23:37.901015   42355 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0913 19:23:37.901022   42355 command_runner.go:130] > # 	"containers_oom_count_total",
	I0913 19:23:37.901027   42355 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0913 19:23:37.901034   42355 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0913 19:23:37.901038   42355 command_runner.go:130] > # ]
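A sketch that keeps metrics enabled but restricts collection to a small, arbitrarily chosen subset of the collectors listed above:
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]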
	I0913 19:23:37.901045   42355 command_runner.go:130] > # The port on which the metrics server will listen.
	I0913 19:23:37.901048   42355 command_runner.go:130] > # metrics_port = 9090
	I0913 19:23:37.901054   42355 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0913 19:23:37.901060   42355 command_runner.go:130] > # metrics_socket = ""
	I0913 19:23:37.901065   42355 command_runner.go:130] > # The certificate for the secure metrics server.
	I0913 19:23:37.901071   42355 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0913 19:23:37.901089   42355 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0913 19:23:37.901094   42355 command_runner.go:130] > # certificate on any modification event.
	I0913 19:23:37.901100   42355 command_runner.go:130] > # metrics_cert = ""
	I0913 19:23:37.901105   42355 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0913 19:23:37.901112   42355 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0913 19:23:37.901117   42355 command_runner.go:130] > # metrics_key = ""
	I0913 19:23:37.901124   42355 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0913 19:23:37.901129   42355 command_runner.go:130] > [crio.tracing]
	I0913 19:23:37.901134   42355 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0913 19:23:37.901141   42355 command_runner.go:130] > # enable_tracing = false
	I0913 19:23:37.901146   42355 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0913 19:23:37.901151   42355 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0913 19:23:37.901160   42355 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0913 19:23:37.901165   42355 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
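A sketch of turning tracing on, assuming an OTLP gRPC collector is reachable on localhost; a sampling rate of 1000000 means every span is sampled, as the comment above notes:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000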
	I0913 19:23:37.901171   42355 command_runner.go:130] > # CRI-O NRI configuration.
	I0913 19:23:37.901174   42355 command_runner.go:130] > [crio.nri]
	I0913 19:23:37.901181   42355 command_runner.go:130] > # Globally enable or disable NRI.
	I0913 19:23:37.901187   42355 command_runner.go:130] > # enable_nri = false
	I0913 19:23:37.901193   42355 command_runner.go:130] > # NRI socket to listen on.
	I0913 19:23:37.901199   42355 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0913 19:23:37.901204   42355 command_runner.go:130] > # NRI plugin directory to use.
	I0913 19:23:37.901209   42355 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0913 19:23:37.901216   42355 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0913 19:23:37.901222   42355 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0913 19:23:37.901230   42355 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0913 19:23:37.901234   42355 command_runner.go:130] > # nri_disable_connections = false
	I0913 19:23:37.901242   42355 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0913 19:23:37.901246   42355 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0913 19:23:37.901251   42355 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0913 19:23:37.901257   42355 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0913 19:23:37.901262   42355 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0913 19:23:37.901268   42355 command_runner.go:130] > [crio.stats]
	I0913 19:23:37.901273   42355 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0913 19:23:37.901281   42355 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0913 19:23:37.901285   42355 command_runner.go:130] > # stats_collection_period = 0
	I0913 19:23:37.901358   42355 cni.go:84] Creating CNI manager for ""
	I0913 19:23:37.901368   42355 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0913 19:23:37.901376   42355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:23:37.901397   42355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-832180 NodeName:multinode-832180 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:23:37.901521   42355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-832180"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:23:37.901581   42355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:23:37.912321   42355 command_runner.go:130] > kubeadm
	I0913 19:23:37.912343   42355 command_runner.go:130] > kubectl
	I0913 19:23:37.912349   42355 command_runner.go:130] > kubelet
	I0913 19:23:37.912427   42355 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:23:37.912508   42355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:23:37.922714   42355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:23:37.941316   42355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:23:37.958864   42355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0913 19:23:37.976825   42355 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0913 19:23:37.980749   42355 command_runner.go:130] > 192.168.39.107	control-plane.minikube.internal
	I0913 19:23:37.980892   42355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:23:38.124686   42355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:23:38.143184   42355 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180 for IP: 192.168.39.107
	I0913 19:23:38.143209   42355 certs.go:194] generating shared ca certs ...
	I0913 19:23:38.143225   42355 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:23:38.143388   42355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:23:38.143436   42355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:23:38.143445   42355 certs.go:256] generating profile certs ...
	I0913 19:23:38.143526   42355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/client.key
	I0913 19:23:38.143590   42355 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key.496af81c
	I0913 19:23:38.143623   42355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key
	I0913 19:23:38.143635   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 19:23:38.143650   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 19:23:38.143662   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 19:23:38.143672   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 19:23:38.143684   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 19:23:38.143694   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 19:23:38.143706   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 19:23:38.143720   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 19:23:38.143777   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:23:38.143822   42355 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:23:38.143835   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:23:38.143869   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:23:38.143893   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:23:38.143915   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:23:38.143954   42355 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:23:38.143995   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.144015   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem -> /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.144027   42355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.144585   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:23:38.169918   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:23:38.199375   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:23:38.225013   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:23:38.249740   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:23:38.274811   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:23:38.298716   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:23:38.322393   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/multinode-832180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:23:38.346877   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:23:38.370793   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:23:38.419112   42355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:23:38.471595   42355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:23:38.506298   42355 ssh_runner.go:195] Run: openssl version
	I0913 19:23:38.515632   42355 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0913 19:23:38.515796   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:23:38.528837   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.534762   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.535087   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.535157   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:23:38.541094   42355 command_runner.go:130] > b5213941
	I0913 19:23:38.541440   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:23:38.551487   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:23:38.563944   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569400   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569649   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.569709   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:23:38.575795   42355 command_runner.go:130] > 51391683
	I0913 19:23:38.575865   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:23:38.587393   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:23:38.600355   42355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605483   42355 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605792   42355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.605842   42355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:23:38.611613   42355 command_runner.go:130] > 3ec20f2e
	I0913 19:23:38.611736   42355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
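	The three ln -fs runs above install each CA into the guest's system trust store: openssl prints the certificate's subject hash (b5213941, 51391683, 3ec20f2e) and a /etc/ssl/certs/<hash>.0 symlink is pointed at the PEM. A minimal Go sketch of that hash-then-symlink step, assuming a certificate already copied under /usr/share/ca-certificates; installCACert is an illustrative name, not a minikube helper:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCACert (illustrative only) reproduces the pattern in the log above:
	// ask openssl for the certificate's subject hash, then point
	// /etc/ssl/certs/<hash>.0 at the PEM so system TLS clients trust it.
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	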
	I0913 19:23:38.621378   42355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:23:38.632025   42355 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:23:38.632053   42355 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0913 19:23:38.632059   42355 command_runner.go:130] > Device: 253,1	Inode: 5242920     Links: 1
	I0913 19:23:38.632068   42355 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0913 19:23:38.632083   42355 command_runner.go:130] > Access: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632092   42355 command_runner.go:130] > Modify: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632100   42355 command_runner.go:130] > Change: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632107   42355 command_runner.go:130] >  Birth: 2024-09-13 19:16:51.281450796 +0000
	I0913 19:23:38.632425   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:23:38.646666   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.646989   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:23:38.656551   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.656718   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:23:38.667349   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.667513   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:23:38.673929   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.674798   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:23:38.684060   42355 command_runner.go:130] > Certificate will not expire
	I0913 19:23:38.684452   42355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:23:38.691312   42355 command_runner.go:130] > Certificate will not expire
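	Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. A minimal sketch of the same check done natively with Go's crypto/x509, assuming one of the cert paths from the log is readable; expiresWithin is an illustrative helper, not minikube code:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what `openssl x509 -checkend 86400` answers for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
	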
	I0913 19:23:38.691382   42355 kubeadm.go:392] StartCluster: {Name:multinode-832180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-832180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
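	The StartCluster dump above is the cluster configuration minikube carries into kubeadm: a kvm2 VM with 2200 MB of memory and 2 CPUs, Kubernetes v1.31.1 on the crio runtime, and three Nodes (the unnamed control plane at 192.168.39.107 plus workers m02 and m03). A heavily simplified, illustrative mirror of that shape, using only values visible in the dump; the real type in minikube's config package has many more fields:
	
	package main
	
	import "fmt"
	
	// Illustrative subset of the config printed above, not minikube's actual struct.
	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}
	
	type ClusterConfig struct {
		Name              string
		Memory            int // MB
		CPUs              int
		Driver            string
		KubernetesVersion string
		ContainerRuntime  string
		Nodes             []Node
	}
	
	func main() {
		cfg := ClusterConfig{
			Name:              "multinode-832180",
			Memory:            2200,
			CPUs:              2,
			Driver:            "kvm2",
			KubernetesVersion: "v1.31.1",
			ContainerRuntime:  "crio",
			Nodes: []Node{
				{Name: "", IP: "192.168.39.107", Port: 8443, ControlPlane: true, Worker: true},
				{Name: "m02", IP: "192.168.39.235", Port: 8443, Worker: true},
				{Name: "m03", IP: "192.168.39.21", Worker: true},
			},
		}
		fmt.Printf("%s: %d node(s), runtime %s\n", cfg.Name, len(cfg.Nodes), cfg.ContainerRuntime)
	}
	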
	I0913 19:23:38.691544   42355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:23:38.691603   42355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:23:38.746175   42355 command_runner.go:130] > f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f
	I0913 19:23:38.746206   42355 command_runner.go:130] > 19c09a93acc27cd0e802edd6cb335a581c1ffb7d3f0352d8f377993a5bb90522
	I0913 19:23:38.746215   42355 command_runner.go:130] > 3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d
	I0913 19:23:38.746227   42355 command_runner.go:130] > 804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903
	I0913 19:23:38.746236   42355 command_runner.go:130] > 96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121
	I0913 19:23:38.746244   42355 command_runner.go:130] > 76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086
	I0913 19:23:38.746252   42355 command_runner.go:130] > 66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6
	I0913 19:23:38.746262   42355 command_runner.go:130] > b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6
	I0913 19:23:38.746270   42355 command_runner.go:130] > 1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01
	I0913 19:23:38.746295   42355 cri.go:89] found id: "f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f"
	I0913 19:23:38.746307   42355 cri.go:89] found id: "19c09a93acc27cd0e802edd6cb335a581c1ffb7d3f0352d8f377993a5bb90522"
	I0913 19:23:38.746311   42355 cri.go:89] found id: "3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d"
	I0913 19:23:38.746315   42355 cri.go:89] found id: "804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903"
	I0913 19:23:38.746320   42355 cri.go:89] found id: "96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121"
	I0913 19:23:38.746324   42355 cri.go:89] found id: "76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086"
	I0913 19:23:38.746328   42355 cri.go:89] found id: "66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6"
	I0913 19:23:38.746332   42355 cri.go:89] found id: "b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6"
	I0913 19:23:38.746336   42355 cri.go:89] found id: "1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01"
	I0913 19:23:38.746344   42355 cri.go:89] found id: ""
	I0913 19:23:38.746394   42355 ssh_runner.go:195] Run: sudo runc list -f json
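	Before restarting the cluster, minikube lists every kube-system container known to CRI-O: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line, which become the "found id" entries above. A minimal sketch of that listing, run locally via exec rather than through minikube's ssh runner; listKubeSystemContainers is an illustrative name:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listKubeSystemContainers mirrors the crictl invocation recorded above:
	// the --quiet output is one container ID per line; blank lines are dropped.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}
	
	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("listing CRI containers:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
	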
	
	
	==> CRI-O <==
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.735921040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255672735885659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=509e446d-c405-4691-8a99-8bb32d8423ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.736654506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c56d89b-ae74-45ae-a917-1593ec8d2744 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.736734894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c56d89b-ae74-45ae-a917-1593ec8d2744 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.737396413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c56d89b-ae74-45ae-a917-1593ec8d2744 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.794496127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae33430c-e001-4160-988b-fc3ec598f351 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.794591673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae33430c-e001-4160-988b-fc3ec598f351 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.795695937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37119c7d-5450-436a-813f-a64154ac4759 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.796081668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255672796057357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37119c7d-5450-436a-813f-a64154ac4759 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.796916132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62bbab23-ff8c-42a9-b1bf-914b0b11e237 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.796972578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62bbab23-ff8c-42a9-b1bf-914b0b11e237 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.797628338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62bbab23-ff8c-42a9-b1bf-914b0b11e237 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.848411764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c22e576b-f9b8-4f97-afac-858a7f0da83d name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.849446220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c22e576b-f9b8-4f97-afac-858a7f0da83d name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.851902286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68499ba6-c01b-4f5c-b10f-8591e2b48d2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.852344781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255672852318278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68499ba6-c01b-4f5c-b10f-8591e2b48d2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.852976166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83bfd33d-d9d0-49d7-9023-3dadbd6476aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.853029333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83bfd33d-d9d0-49d7-9023-3dadbd6476aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.853607289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83bfd33d-d9d0-49d7-9023-3dadbd6476aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.899144855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d59df301-4125-4604-bd26-6abd3fe6d153 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.899235893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d59df301-4125-4604-bd26-6abd3fe6d153 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.900984852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25e1555f-017f-4e95-97bd-8819602368ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.901431928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255672901409480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25e1555f-017f-4e95-97bd-8819602368ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.901960261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a624086d-12e6-415e-b0ff-dcdc46a9f952 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.902033693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a624086d-12e6-415e-b0ff-dcdc46a9f952 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:27:52 multinode-832180 crio[2751]: time="2024-09-13 19:27:52.902418198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c02390146341d82db4a745936ef5e4fc584c13547f15f07045ec2a7d1bad1237,PodSandboxId:305b044bacb17066710d2bd5e723416c8b4519d8c3d6f81c535977548f869a75,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726255466705438152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7,PodSandboxId:81bf57f457cf7d63876353284fa2d797912b4ab7376fb46332ed946abee821e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726255433165155478,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daed530ab23b2fbebe0aee06dba2f69f17c0d4522c2e8e12efe610e0005dbc97,PodSandboxId:82adf420ce8e66e8027b40ee471408cf2229bc50b8e2a942c589e5dd8f3609ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726255433092587162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d,PodSandboxId:36637a2505a8d97b6bb271c7e51720e3d4cf6d7770607a4d368bad6fd580fc2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726255433034139547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726255432764485346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854,PodSandboxId:8826fdf3e66c2da85b8ec81c06ea3d3dd0a13ad59556a52ab20abc28b9122e00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726255426030379347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe,PodSandboxId:df8ca9089a3d13e14e3785e715a4e3425af66f71361f415452a55c032d267326,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726255425787129481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436,PodSandboxId:07eca42c194745b4a445f6fd0c304123c164c2b3214da5cd13d1c7dabc0fa0aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726255423381008734,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846,PodSandboxId:246e4d12a9a5ed0e0938da74a074a7e2485c4a15021ccf5d168c8ee0830ab839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726255419065099587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f,PodSandboxId:36f0301cecbeca54cc91ea6023b5917b09f2feef0053fd35c7e6fb061d981a9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726255418579065761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w8ktp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b018c0-323f-45be-bba9-aeaf9cbbef5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080eec6d9e5b325df2b46eaddecc0a4b53a904434e4a42e3f0c8f4ca9b90b81,PodSandboxId:cc39c6400994b23e4971534f4ae85c59c9d126cd7808091e33201e0f2bd30fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726255095408623807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mjlx4,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d05f5737-22be-4f7d-b3d0-e7c8e5ce9046,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3153bfb4d1050389bf7f3fceda066f6e1f0f1087931c4747ae64bc8e11fbe98d,PodSandboxId:01a1ad4d7857c2131e48a1fca5f0dddeeb36ba5e970fc7457985bf8148cd0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726255037865215707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e97c9822-6884-43de-a860-acdc7a78b0f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903,PodSandboxId:ecd348aa55f04442062b8efc8222d1606a08ac572ed751a85b8ae8687a706d48,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726255026077890301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prk4k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f26cb6d0-3d35-47a6-b489-383b69a2e8b2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121,PodSandboxId:dac5dc01376abc255c4b7b216c24016112b58bdd7114b368667af4e3aedf2806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726255026055632033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sntdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958927cd-96b7-461e-b040
-719db9b90632,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6,PodSandboxId:84898a16bf03a195b3e0c35f25318953a31fa7c53cd92a6957ed5bb744af8cfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726255014890552337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262925470eceb1d38958926c17eb839,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086,PodSandboxId:6276abda062c56f56eca3cc3ab3b22b2d633c06a9e9ab544dc2d126d6cb1493e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726255014894668109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8394f2a8329cfeddd998e539353d3436,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6,PodSandboxId:267a8b99ca5db62edee6a206e0f8e0a21676d8aeb34b48db6eabbe9521bffdff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726255014842830220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf558c82f7c07ac985a1806cbb94a60,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01,PodSandboxId:2bf7064075a7b83dbcbcf187a9fb07dd000aed4095b6b9b658c0c1cd307441e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726255014794442252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-832180,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3865056459676d8751348bed55faac93,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a624086d-12e6-415e-b0ff-dcdc46a9f952 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c02390146341d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   305b044bacb17       busybox-7dff88458-mjlx4
	a29ceeace1750       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               1                   81bf57f457cf7       kindnet-prk4k
	daed530ab23b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   82adf420ce8e6       storage-provisioner
	9edd612ed0492       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago       Running             kube-proxy                1                   36637a2505a8d       kube-proxy-sntdv
	e5f29f7f76bdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   36f0301cecbec       coredns-7c65d6cfc9-w8ktp
	2696e7a77e5d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   8826fdf3e66c2       etcd-multinode-832180
	fc030c85d46d7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   df8ca9089a3d1       kube-controller-manager-multinode-832180
	99b4d9896fa0b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   07eca42c19474       kube-scheduler-multinode-832180
	02b3f56cb8d96       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   246e4d12a9a5e       kube-apiserver-multinode-832180
	f7319753489e7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Exited              coredns                   1                   36f0301cecbec       coredns-7c65d6cfc9-w8ktp
	c080eec6d9e5b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   cc39c6400994b       busybox-7dff88458-mjlx4
	3153bfb4d1050       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   01a1ad4d7857c       storage-provisioner
	804b00dc869d9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   ecd348aa55f04       kindnet-prk4k
	96891beb662f6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   dac5dc01376ab       kube-proxy-sntdv
	76ff5353d55e9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   6276abda062c5       kube-controller-manager-multinode-832180
	66fe7d1de1c37       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   84898a16bf03a       etcd-multinode-832180
	b426c3236a868       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   267a8b99ca5db       kube-scheduler-multinode-832180
	1cfc48ae630fd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   2bf7064075a7b       kube-apiserver-multinode-832180
	
	
	==> coredns [e5f29f7f76bdf322e6bb6447a9f1665ccb44f5eba45b48899ee7784938f7b3ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44791 - 9377 "HINFO IN 4364243046564626994.2591110203882479955. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015007036s
	
	
	==> coredns [f7319753489e7bfcfa89a8e1853b8d19f22ea6dcbce9d53d2c4e88651e86183f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:42000 - 9876 "HINFO IN 310529751341749349.2565869892791839521. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015265524s
	
	
	==> describe nodes <==
	Name:               multinode-832180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-832180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=multinode-832180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_17_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-832180
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:27:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:16:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:23:51 +0000   Fri, 13 Sep 2024 19:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-832180
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04c9021f9b344e88906d4281c3d54114
	  System UUID:                04c9021f-9b34-4e88-906d-4281c3d54114
	  Boot ID:                    c72d22e3-5904-415c-909f-d71bc2e65107
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mjlx4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 coredns-7c65d6cfc9-w8ktp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-832180                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-prk4k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-832180             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-832180    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sntdv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-832180             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-832180 event: Registered Node multinode-832180 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-832180 status is now: NodeReady
	  Normal  Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node multinode-832180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node multinode-832180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node multinode-832180 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node multinode-832180 event: Registered Node multinode-832180 in Controller
	
	
	Name:               multinode-832180-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-832180-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=multinode-832180
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T19_24_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:24:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-832180-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:25:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 19:24:58 +0000   Fri, 13 Sep 2024 19:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    multinode-832180-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 41e9fdb14f5e4f07b2124dd3b9aa13fb
	  System UUID:                41e9fdb1-4f5e-4f07-b212-4dd3b9aa13fb
	  Boot ID:                    b19e2870-4145-4af6-878d-8985ceb03442
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99fvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-sdfsx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-sgggj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m58s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-832180-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-832180-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-832180-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m44s                  kubelet          Node multinode-832180-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-832180-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-832180-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-832180-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-832180-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-832180-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.061831] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.195951] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.134058] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.271704] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.926806] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.177423] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056870] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990166] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.071297] kauditd_printk_skb: 69 callbacks suppressed
	[Sep13 19:17] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.111131] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.637323] kauditd_printk_skb: 69 callbacks suppressed
	[Sep13 19:18] kauditd_printk_skb: 14 callbacks suppressed
	[Sep13 19:23] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.144307] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.172772] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +0.150470] systemd-fstab-generator[2714]: Ignoring "noauto" option for root device
	[  +0.299619] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +1.654897] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +5.324522] kauditd_printk_skb: 147 callbacks suppressed
	[  +6.649948] systemd-fstab-generator[3374]: Ignoring "noauto" option for root device
	[  +0.098308] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.095828] kauditd_printk_skb: 52 callbacks suppressed
	[Sep13 19:24] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[ +24.280957] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2696e7a77e5d55f6f846833ffbae53b024d1b919e4ad29ab0e7c5f3330c5e854] <==
	{"level":"info","ts":"2024-09-13T19:23:46.223126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2024-09-13T19:23:46.223609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:46.224781Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:23:46.224837Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:23:46.229230Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T19:23:46.229580Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:23:46.229620Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:23:46.229772Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:23:46.229797Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:23:47.501680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-09-13T19:23:47.501897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.501991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-13T19:23:47.508441Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:multinode-832180 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:23:47.508458Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:23:47.508782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:23:47.508861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:23:47.508499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:23:47.509753Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:47.509786Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:23:47.510674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:23:47.510814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	
	
	==> etcd [66fe7d1de1c37cc0d744c6d67ad7f13380cc730caab86f506d4bb5cf90433fd6] <==
	{"level":"info","ts":"2024-09-13T19:17:54.885468Z","caller":"traceutil/trace.go:171","msg":"trace[547892435] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"591.620953ms","start":"2024-09-13T19:17:54.293837Z","end":"2024-09-13T19:17:54.885458Z","steps":["trace[547892435] 'process raft request'  (duration: 554.029208ms)","trace[547892435] 'compare'  (duration: 37.11374ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:17:54.886566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.293820Z","time spent":"592.682653ms","remote":"127.0.0.1:44170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-832180-m02.17f4e3d80edd35ed\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-832180-m02.17f4e3d80edd35ed\" value_size:629 lease:3701556390052319695 >> failure:<>"}
	{"level":"info","ts":"2024-09-13T19:17:54.886090Z","caller":"traceutil/trace.go:171","msg":"trace[1035367749] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"441.554463ms","start":"2024-09-13T19:17:54.444477Z","end":"2024-09-13T19:17:54.886031Z","steps":["trace[1035367749] 'process raft request'  (duration: 440.655081ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.886813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.444455Z","time spent":"442.329686ms","remote":"127.0.0.1:44170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":728,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-sgggj.17f4e3d81733dc5c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-sgggj.17f4e3d81733dc5c\" value_size:648 lease:3701556390052319695 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:17:54.887114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.775271ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:17:54.887171Z","caller":"traceutil/trace.go:171","msg":"trace[1961735353] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:506; }","duration":"267.834397ms","start":"2024-09-13T19:17:54.619326Z","end":"2024-09-13T19:17:54.887160Z","steps":["trace[1961735353] 'agreement among raft nodes before linearized reading'  (duration: 267.761166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.887942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.61981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-832180-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-13T19:17:54.888886Z","caller":"traceutil/trace.go:171","msg":"trace[1399790241] range","detail":"{range_begin:/registry/minions/multinode-832180-m02; range_end:; response_count:1; response_revision:506; }","duration":"338.565024ms","start":"2024-09-13T19:17:54.550306Z","end":"2024-09-13T19:17:54.888871Z","steps":["trace[1399790241] 'agreement among raft nodes before linearized reading'  (duration: 336.059212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:17:54.888997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:17:54.550234Z","time spent":"338.747284ms","remote":"127.0.0.1:44304","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2917,"request content":"key:\"/registry/minions/multinode-832180-m02\" "}
	{"level":"warn","ts":"2024-09-13T19:17:54.889168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.083129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-832180-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-13T19:17:54.889242Z","caller":"traceutil/trace.go:171","msg":"trace[2060334768] range","detail":"{range_begin:/registry/minions/multinode-832180-m02; range_end:; response_count:1; response_revision:506; }","duration":"288.158853ms","start":"2024-09-13T19:17:54.601074Z","end":"2024-09-13T19:17:54.889233Z","steps":["trace[2060334768] 'agreement among raft nodes before linearized reading'  (duration: 288.056896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:18:47.486692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.236597ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:18:47.487100Z","caller":"traceutil/trace.go:171","msg":"trace[88599264] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:606; }","duration":"122.728743ms","start":"2024-09-13T19:18:47.364349Z","end":"2024-09-13T19:18:47.487078Z","steps":["trace[88599264] 'range keys from in-memory index tree'  (duration: 122.213458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:18:47.486948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.652376ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3701556390052321208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-832180-m03.17f4e3e462bb82c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-832180-m03.17f4e3e462bb82c4\" value_size:646 lease:3701556390052320798 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:18:47.487329Z","caller":"traceutil/trace.go:171","msg":"trace[994767466] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"238.678573ms","start":"2024-09-13T19:18:47.248635Z","end":"2024-09-13T19:18:47.487313Z","steps":["trace[994767466] 'process raft request'  (duration: 88.517151ms)","trace[994767466] 'compare'  (duration: 149.420411ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:22:04.271412Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T19:22:04.271541Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-832180","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	{"level":"warn","ts":"2024-09-13T19:22:04.271684Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.271772Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.327251Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:22:04.327385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T19:22:04.327509Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec1614c5c0f7335e","current-leader-member-id":"ec1614c5c0f7335e"}
	{"level":"info","ts":"2024-09-13T19:22:04.334645Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:22:04.334838Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-13T19:22:04.334874Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-832180","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	
	
	==> kernel <==
	 19:27:53 up 11 min,  0 users,  load average: 0.15, 0.20, 0.13
	Linux multinode-832180 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [804b00dc869d900db3f006dba9a31975b04af1bf58d926814fe7af38496dc903] <==
	I0913 19:21:17.224740       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:27.228585       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:27.228702       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:27.228871       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:27.228901       1 main.go:299] handling current node
	I0913 19:21:27.228928       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:27.228945       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:37.232564       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:37.232627       1 main.go:299] handling current node
	I0913 19:21:37.232661       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:37.232667       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:37.232812       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:37.232835       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:47.224916       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:47.225046       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:47.225245       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:47.225326       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:47.225450       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:47.225477       1 main.go:299] handling current node
	I0913 19:21:57.226360       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:21:57.226413       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:21:57.226546       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0913 19:21:57.226570       1 main.go:322] Node multinode-832180-m03 has CIDR [10.244.3.0/24] 
	I0913 19:21:57.226640       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:21:57.226663       1 main.go:299] handling current node
	
	
	==> kindnet [a29ceeace1750c7a2d6a3b8401821e73aece9da2fb254f28769c10161887b8b7] <==
	I0913 19:26:44.215672       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:26:54.215798       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:26:54.215844       1 main.go:299] handling current node
	I0913 19:26:54.215859       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:26:54.215865       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:27:04.214961       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:27:04.215079       1 main.go:299] handling current node
	I0913 19:27:04.215108       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:27:04.215126       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:27:14.218391       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:27:14.218487       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:27:14.218632       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:27:14.218656       1 main.go:299] handling current node
	I0913 19:27:24.220126       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:27:24.220332       1 main.go:299] handling current node
	I0913 19:27:24.220374       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:27:24.220394       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:27:34.218723       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:27:34.218768       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	I0913 19:27:34.218887       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:27:34.218913       1 main.go:299] handling current node
	I0913 19:27:44.218906       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0913 19:27:44.219023       1 main.go:299] handling current node
	I0913 19:27:44.219061       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0913 19:27:44.219081       1 main.go:322] Node multinode-832180-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [02b3f56cb8d9643708ed133fcc56af3444c05408cdded724d5c2ef364d818846] <==
	I0913 19:23:51.521570       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:23:51.524955       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:23:51.525185       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:23:51.525234       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:23:51.525343       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:23:51.525369       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:23:51.525392       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:23:51.525413       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:23:51.526250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:23:51.526312       1 policy_source.go:224] refreshing policies
	I0913 19:23:51.530374       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:23:51.530437       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:23:51.530448       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:23:51.531089       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:23:51.530456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:23:51.537571       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:23:51.538047       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:23:52.334444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:23:53.732493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 19:23:53.902877       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 19:23:53.918220       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 19:23:54.003589       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:23:54.019691       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:23:55.016036       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:23:55.164791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [1cfc48ae630fdc726595c0684fbe9856feb456a3a168627d15486e1ea532ef01] <==
	E0913 19:18:16.856527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43310: use of closed network connection
	E0913 19:18:17.037526       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43336: use of closed network connection
	E0913 19:18:17.210463       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43346: use of closed network connection
	E0913 19:18:17.387995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43368: use of closed network connection
	E0913 19:18:17.567495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43398: use of closed network connection
	E0913 19:18:17.741129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43424: use of closed network connection
	E0913 19:18:18.016837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43448: use of closed network connection
	E0913 19:18:18.184869       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43466: use of closed network connection
	E0913 19:18:18.353883       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43490: use of closed network connection
	E0913 19:18:18.521138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:43510: use of closed network connection
	I0913 19:22:04.264674       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0913 19:22:04.291627       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0913 19:22:04.294749       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:22:04.295612       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:22:04.295806       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0913 19:22:04.296472       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0913 19:22:04.297871       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0913 19:22:04.298492       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0913 19:22:04.298788       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0913 19:22:04.298866       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0913 19:22:04.298982       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	W0913 19:22:04.299023       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0913 19:22:04.299098       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0913 19:22:04.299179       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0913 19:22:04.299208       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	
	
	==> kube-controller-manager [76ff5353d55e94cf485f3093825b6dda34d70a5ef34ebb1741894b8520890086] <==
	I0913 19:19:38.196468       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:38.196756       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:39.421905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:39.423660       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-832180-m03\" does not exist"
	I0913 19:19:39.432887       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-832180-m03" podCIDRs=["10.244.3.0/24"]
	I0913 19:19:39.432945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.433245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.445644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:39.750568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:40.155878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:44.411520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:49.758822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:58.548929       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:19:58.550092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:58.562002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:19:59.350089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.371810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:20:44.372497       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.375619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:20:44.402647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:44.405943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:20:44.440117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.942689ms"
	I0913 19:20:44.440334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.082µs"
	I0913 19:20:49.519663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:20:59.598372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	
	
	==> kube-controller-manager [fc030c85d46d7541300dea35ade9029df1b9b60bc0629342a25c9142b0c9b3fe] <==
	I0913 19:25:06.777427       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-832180-m03" podCIDRs=["10.244.2.0/24"]
	I0913 19:25:06.777509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:06.788631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:06.807253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:07.261855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:07.608594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:10.062840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:16.838380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:26.591109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:26.591342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:25:26.614908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:30.026840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:31.218835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:31.232321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:31.701435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m03"
	I0913 19:25:31.701989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-832180-m02"
	I0913 19:26:10.047482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:26:10.068785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	I0913 19:26:10.082660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.06349ms"
	I0913 19:26:10.085419       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.07µs"
	I0913 19:26:14.800547       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lg94d"
	I0913 19:26:14.830867       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lg94d"
	I0913 19:26:14.830907       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7zhjz"
	I0913 19:26:14.854565       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7zhjz"
	I0913 19:26:15.173005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-832180-m02"
	
	
	==> kube-proxy [96891beb662f6bf1a22be94e865f849c490f37212044b80a4e7ed40fb4e6b121] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:17:06.330737       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:17:06.360615       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0913 19:17:06.360738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:17:06.397565       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:17:06.397615       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:17:06.397638       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:17:06.400208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:17:06.400637       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:17:06.400663       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:17:06.402771       1 config.go:199] "Starting service config controller"
	I0913 19:17:06.402810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:17:06.402841       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:17:06.402847       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:17:06.403473       1 config.go:328] "Starting node config controller"
	I0913 19:17:06.403495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:17:06.503787       1 shared_informer.go:320] Caches are synced for node config
	I0913 19:17:06.503878       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:17:06.503920       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9edd612ed049272f921fb281466af068531ec5cd17cbc02c76c33d5e94a7613d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:23:53.381234       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:23:53.395063       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0913 19:23:53.395185       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:23:53.535425       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:23:53.535471       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:23:53.535498       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:23:53.542956       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:23:53.543314       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:23:53.543342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:23:53.548980       1 config.go:199] "Starting service config controller"
	I0913 19:23:53.549041       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:23:53.549095       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:23:53.549117       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:23:53.549687       1 config.go:328] "Starting node config controller"
	I0913 19:23:53.549719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:23:53.649315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:23:53.649424       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:23:53.651394       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [99b4d9896fa0b4c8cd5c9098095f8e2a5217d7abea4c3b7d43c4e01767bd2436] <==
	I0913 19:23:43.882424       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:23:51.378898       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:23:51.378990       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:23:51.379018       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:23:51.379048       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:23:51.431447       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:23:51.431676       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:23:51.438351       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:23:51.440132       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:23:51.442503       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:23:51.442671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:23:51.541028       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b426c3236a8688919bb3f76b648d81b1dd24a3223eb10aa26ef46ae404f85df6] <==
	E0913 19:16:57.663922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:57.664025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:57.664056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:57.663962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 19:16:57.664184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.536147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:58.536319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.584757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 19:16:58.584806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.610672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 19:16:58.610707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.642453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:16:58.642606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.694346       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 19:16:58.694395       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 19:16:58.698057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 19:16:58.698107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.806034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 19:16:58.806193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.813414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 19:16:58.813467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 19:16:58.875372       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 19:16:58.875467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0913 19:17:01.151847       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 19:22:04.264978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:26:40 multinode-832180 kubelet[3381]: E0913 19:26:40.559124    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255600558445821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:26:50 multinode-832180 kubelet[3381]: E0913 19:26:50.485645    3381 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:26:50 multinode-832180 kubelet[3381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:26:50 multinode-832180 kubelet[3381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:26:50 multinode-832180 kubelet[3381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:26:50 multinode-832180 kubelet[3381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:26:50 multinode-832180 kubelet[3381]: E0913 19:26:50.560511    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255610559829517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:26:50 multinode-832180 kubelet[3381]: E0913 19:26:50.560819    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255610559829517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:00 multinode-832180 kubelet[3381]: E0913 19:27:00.562296    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255620561612735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:00 multinode-832180 kubelet[3381]: E0913 19:27:00.562339    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255620561612735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:10 multinode-832180 kubelet[3381]: E0913 19:27:10.564021    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255630563684907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:10 multinode-832180 kubelet[3381]: E0913 19:27:10.564077    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255630563684907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:20 multinode-832180 kubelet[3381]: E0913 19:27:20.565899    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255640565469834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:20 multinode-832180 kubelet[3381]: E0913 19:27:20.565938    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255640565469834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:30 multinode-832180 kubelet[3381]: E0913 19:27:30.569114    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255650567047957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:30 multinode-832180 kubelet[3381]: E0913 19:27:30.569572    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255650567047957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:40 multinode-832180 kubelet[3381]: E0913 19:27:40.578329    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255660577699716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:40 multinode-832180 kubelet[3381]: E0913 19:27:40.578389    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255660577699716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:50 multinode-832180 kubelet[3381]: E0913 19:27:50.485901    3381 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 19:27:50 multinode-832180 kubelet[3381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 19:27:50 multinode-832180 kubelet[3381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 19:27:50 multinode-832180 kubelet[3381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 19:27:50 multinode-832180 kubelet[3381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 19:27:50 multinode-832180 kubelet[3381]: E0913 19:27:50.582889    3381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255670581718341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:27:50 multinode-832180 kubelet[3381]: E0913 19:27:50.582916    3381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726255670581718341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:27:52.418295   44266 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-832180 -n multinode-832180
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-832180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.29s)

x
+
TestPreload (220.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0913 19:34:06.601474   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.645193329s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-769198 image pull gcr.io/k8s-minikube/busybox: (3.504130121s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-769198
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-769198: (7.284427663s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0913 19:35:40.647430   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.60790269s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769198 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-13 19:35:43.111279076 +0000 UTC m=+4496.590284312
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-769198 -n test-preload-769198
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769198 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-769198 logs -n 25: (1.119188116s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180 sudo cat                                       | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt                       | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m02:/home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n                                                                 | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | multinode-832180-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-832180 ssh -n multinode-832180-m02 sudo cat                                   | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	|         | /home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-832180 node stop m03                                                          | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:19 UTC |
	| node    | multinode-832180 node start                                                             | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:19 UTC | 13 Sep 24 19:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| stop    | -p multinode-832180                                                                     | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:20 UTC |                     |
	| start   | -p multinode-832180                                                                     | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:22 UTC | 13 Sep 24 19:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC |                     |
	| node    | multinode-832180 node delete                                                            | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC | 13 Sep 24 19:25 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-832180 stop                                                                   | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:25 UTC |                     |
	| start   | -p multinode-832180                                                                     | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:27 UTC | 13 Sep 24 19:31 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-832180                                                                | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:31 UTC |                     |
	| start   | -p multinode-832180-m02                                                                 | multinode-832180-m02 | jenkins | v1.34.0 | 13 Sep 24 19:31 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-832180-m03                                                                 | multinode-832180-m03 | jenkins | v1.34.0 | 13 Sep 24 19:31 UTC | 13 Sep 24 19:32 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-832180                                                                 | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:32 UTC |                     |
	| delete  | -p multinode-832180-m03                                                                 | multinode-832180-m03 | jenkins | v1.34.0 | 13 Sep 24 19:32 UTC | 13 Sep 24 19:32 UTC |
	| delete  | -p multinode-832180                                                                     | multinode-832180     | jenkins | v1.34.0 | 13 Sep 24 19:32 UTC | 13 Sep 24 19:32 UTC |
	| start   | -p test-preload-769198                                                                  | test-preload-769198  | jenkins | v1.34.0 | 13 Sep 24 19:32 UTC | 13 Sep 24 19:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-769198 image pull                                                          | test-preload-769198  | jenkins | v1.34.0 | 13 Sep 24 19:34 UTC | 13 Sep 24 19:34 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-769198                                                                  | test-preload-769198  | jenkins | v1.34.0 | 13 Sep 24 19:34 UTC | 13 Sep 24 19:34 UTC |
	| start   | -p test-preload-769198                                                                  | test-preload-769198  | jenkins | v1.34.0 | 13 Sep 24 19:34 UTC | 13 Sep 24 19:35 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-769198 image list                                                          | test-preload-769198  | jenkins | v1.34.0 | 13 Sep 24 19:35 UTC | 13 Sep 24 19:35 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:34:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:34:31.329894   46862 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:34:31.330011   46862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:34:31.330020   46862 out.go:358] Setting ErrFile to fd 2...
	I0913 19:34:31.330024   46862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:34:31.330219   46862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:34:31.330726   46862 out.go:352] Setting JSON to false
	I0913 19:34:31.331552   46862 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4614,"bootTime":1726251457,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:34:31.331640   46862 start.go:139] virtualization: kvm guest
	I0913 19:34:31.333787   46862 out.go:177] * [test-preload-769198] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:34:31.335062   46862 notify.go:220] Checking for updates...
	I0913 19:34:31.335072   46862 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:34:31.336369   46862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:34:31.337954   46862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:34:31.339406   46862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:34:31.340712   46862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:34:31.342061   46862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:34:31.343658   46862 config.go:182] Loaded profile config "test-preload-769198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0913 19:34:31.344017   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:34:31.344081   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:34:31.358463   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0913 19:34:31.358799   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:34:31.359304   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:34:31.359328   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:34:31.359658   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:34:31.359848   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:34:31.361582   46862 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:34:31.362695   46862 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:34:31.362999   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:34:31.363041   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:34:31.377610   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0913 19:34:31.378063   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:34:31.378586   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:34:31.378619   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:34:31.378942   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:34:31.379106   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:34:31.413914   46862 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:34:31.415316   46862 start.go:297] selected driver: kvm2
	I0913 19:34:31.415334   46862 start.go:901] validating driver "kvm2" against &{Name:test-preload-769198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-769198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:34:31.415451   46862 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:34:31.416086   46862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:34:31.416184   46862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:34:31.431069   46862 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:34:31.431441   46862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:34:31.431470   46862 cni.go:84] Creating CNI manager for ""
	I0913 19:34:31.431510   46862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:34:31.431563   46862 start.go:340] cluster config:
	{Name:test-preload-769198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-769198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:34:31.431663   46862 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:34:31.433455   46862 out.go:177] * Starting "test-preload-769198" primary control-plane node in "test-preload-769198" cluster
	I0913 19:34:31.434853   46862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0913 19:34:31.998942   46862 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0913 19:34:31.999003   46862 cache.go:56] Caching tarball of preloaded images
	I0913 19:34:31.999159   46862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0913 19:34:32.001250   46862 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0913 19:34:32.002925   46862 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0913 19:34:32.120380   46862 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0913 19:34:45.915970   46862 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0913 19:34:45.916070   46862 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0913 19:34:46.756213   46862 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0913 19:34:46.756348   46862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/config.json ...
	I0913 19:34:46.756576   46862 start.go:360] acquireMachinesLock for test-preload-769198: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:34:46.756631   46862 start.go:364] duration metric: took 36.187µs to acquireMachinesLock for "test-preload-769198"
	I0913 19:34:46.756645   46862 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:34:46.756650   46862 fix.go:54] fixHost starting: 
	I0913 19:34:46.756907   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:34:46.756939   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:34:46.771385   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0913 19:34:46.771849   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:34:46.772368   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:34:46.772389   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:34:46.772750   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:34:46.772916   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:34:46.773065   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetState
	I0913 19:34:46.774622   46862 fix.go:112] recreateIfNeeded on test-preload-769198: state=Stopped err=<nil>
	I0913 19:34:46.774648   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	W0913 19:34:46.774777   46862 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:34:46.777149   46862 out.go:177] * Restarting existing kvm2 VM for "test-preload-769198" ...
	I0913 19:34:46.778584   46862 main.go:141] libmachine: (test-preload-769198) Calling .Start
	I0913 19:34:46.778766   46862 main.go:141] libmachine: (test-preload-769198) Ensuring networks are active...
	I0913 19:34:46.779654   46862 main.go:141] libmachine: (test-preload-769198) Ensuring network default is active
	I0913 19:34:46.780003   46862 main.go:141] libmachine: (test-preload-769198) Ensuring network mk-test-preload-769198 is active
	I0913 19:34:46.780339   46862 main.go:141] libmachine: (test-preload-769198) Getting domain xml...
	I0913 19:34:46.780991   46862 main.go:141] libmachine: (test-preload-769198) Creating domain...
	I0913 19:34:47.981797   46862 main.go:141] libmachine: (test-preload-769198) Waiting to get IP...
	I0913 19:34:47.982648   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:47.983028   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:47.983097   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:47.983009   46946 retry.go:31] will retry after 312.140601ms: waiting for machine to come up
	I0913 19:34:48.296553   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:48.296940   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:48.296968   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:48.296919   46946 retry.go:31] will retry after 314.830219ms: waiting for machine to come up
	I0913 19:34:48.613544   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:48.613944   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:48.613966   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:48.613914   46946 retry.go:31] will retry after 308.988182ms: waiting for machine to come up
	I0913 19:34:48.924315   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:48.924733   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:48.924761   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:48.924685   46946 retry.go:31] will retry after 456.244247ms: waiting for machine to come up
	I0913 19:34:49.382278   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:49.382614   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:49.382636   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:49.382574   46946 retry.go:31] will retry after 651.566052ms: waiting for machine to come up
	I0913 19:34:50.035295   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:50.035712   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:50.035740   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:50.035671   46946 retry.go:31] will retry after 647.805477ms: waiting for machine to come up
	I0913 19:34:50.685606   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:50.686004   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:50.686032   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:50.685953   46946 retry.go:31] will retry after 820.179532ms: waiting for machine to come up
	I0913 19:34:51.507849   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:51.508236   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:51.508271   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:51.508205   46946 retry.go:31] will retry after 1.075907575s: waiting for machine to come up
	I0913 19:34:52.585897   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:52.586379   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:52.586410   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:52.586351   46946 retry.go:31] will retry after 1.658298435s: waiting for machine to come up
	I0913 19:34:54.247150   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:54.247503   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:54.247533   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:54.247457   46946 retry.go:31] will retry after 1.617071357s: waiting for machine to come up
	I0913 19:34:55.867328   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:55.867761   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:55.867786   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:55.867727   46946 retry.go:31] will retry after 2.537175221s: waiting for machine to come up
	I0913 19:34:58.406290   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:34:58.406658   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:34:58.406706   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:34:58.406632   46946 retry.go:31] will retry after 3.42443027s: waiting for machine to come up
	I0913 19:35:01.832328   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:01.832707   46862 main.go:141] libmachine: (test-preload-769198) DBG | unable to find current IP address of domain test-preload-769198 in network mk-test-preload-769198
	I0913 19:35:01.832740   46862 main.go:141] libmachine: (test-preload-769198) DBG | I0913 19:35:01.832661   46946 retry.go:31] will retry after 3.680838977s: waiting for machine to come up
	I0913 19:35:05.517556   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.517971   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has current primary IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.517993   46862 main.go:141] libmachine: (test-preload-769198) Found IP for machine: 192.168.39.171
	I0913 19:35:05.518006   46862 main.go:141] libmachine: (test-preload-769198) Reserving static IP address...
	I0913 19:35:05.518384   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "test-preload-769198", mac: "52:54:00:b8:f0:e3", ip: "192.168.39.171"} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.518407   46862 main.go:141] libmachine: (test-preload-769198) DBG | skip adding static IP to network mk-test-preload-769198 - found existing host DHCP lease matching {name: "test-preload-769198", mac: "52:54:00:b8:f0:e3", ip: "192.168.39.171"}
	I0913 19:35:05.518421   46862 main.go:141] libmachine: (test-preload-769198) DBG | Getting to WaitForSSH function...
	I0913 19:35:05.518428   46862 main.go:141] libmachine: (test-preload-769198) Reserved static IP address: 192.168.39.171
	I0913 19:35:05.518439   46862 main.go:141] libmachine: (test-preload-769198) Waiting for SSH to be available...
	I0913 19:35:05.520432   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.520742   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.520771   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.520894   46862 main.go:141] libmachine: (test-preload-769198) DBG | Using SSH client type: external
	I0913 19:35:05.520930   46862 main.go:141] libmachine: (test-preload-769198) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa (-rw-------)
	I0913 19:35:05.520965   46862 main.go:141] libmachine: (test-preload-769198) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:35:05.520981   46862 main.go:141] libmachine: (test-preload-769198) DBG | About to run SSH command:
	I0913 19:35:05.521003   46862 main.go:141] libmachine: (test-preload-769198) DBG | exit 0
	I0913 19:35:05.646029   46862 main.go:141] libmachine: (test-preload-769198) DBG | SSH cmd err, output: <nil>: 
	I0913 19:35:05.646444   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetConfigRaw
	I0913 19:35:05.647212   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetIP
	I0913 19:35:05.649359   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.649726   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.649753   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.649985   46862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/config.json ...
	I0913 19:35:05.650188   46862 machine.go:93] provisionDockerMachine start ...
	I0913 19:35:05.650204   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:05.650379   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:05.652579   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.652901   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.652935   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.653045   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:05.653181   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.653320   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.653424   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:05.653553   46862 main.go:141] libmachine: Using SSH client type: native
	I0913 19:35:05.653733   46862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0913 19:35:05.653743   46862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:35:05.762557   46862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:35:05.762587   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetMachineName
	I0913 19:35:05.762807   46862 buildroot.go:166] provisioning hostname "test-preload-769198"
	I0913 19:35:05.762831   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetMachineName
	I0913 19:35:05.763020   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:05.765322   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.765681   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.765712   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.765805   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:05.765971   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.766116   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.766248   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:05.766380   46862 main.go:141] libmachine: Using SSH client type: native
	I0913 19:35:05.766571   46862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0913 19:35:05.766585   46862 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-769198 && echo "test-preload-769198" | sudo tee /etc/hostname
	I0913 19:35:05.888921   46862 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-769198
	
	I0913 19:35:05.888946   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:05.891619   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.891962   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:05.891990   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:05.892126   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:05.892350   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.892528   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:05.892684   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:05.892807   46862 main.go:141] libmachine: Using SSH client type: native
	I0913 19:35:05.893016   46862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0913 19:35:05.893046   46862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-769198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-769198/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-769198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:35:06.012271   46862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:35:06.012305   46862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:35:06.012341   46862 buildroot.go:174] setting up certificates
	I0913 19:35:06.012352   46862 provision.go:84] configureAuth start
	I0913 19:35:06.012366   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetMachineName
	I0913 19:35:06.012636   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetIP
	I0913 19:35:06.014865   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.015169   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.015191   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.015279   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.017389   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.017710   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.017738   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.017837   46862 provision.go:143] copyHostCerts
	I0913 19:35:06.017886   46862 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:35:06.017896   46862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:35:06.017959   46862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:35:06.018053   46862 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:35:06.018063   46862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:35:06.018087   46862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:35:06.018169   46862 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:35:06.018174   46862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:35:06.018200   46862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:35:06.018247   46862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.test-preload-769198 san=[127.0.0.1 192.168.39.171 localhost minikube test-preload-769198]
	I0913 19:35:06.194394   46862 provision.go:177] copyRemoteCerts
	I0913 19:35:06.194457   46862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:35:06.194481   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.196856   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.197111   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.197146   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.197290   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.197443   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.197574   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.197704   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:06.284817   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:35:06.309297   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:35:06.333440   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:35:06.357302   46862 provision.go:87] duration metric: took 344.936452ms to configureAuth
	I0913 19:35:06.357328   46862 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:35:06.357524   46862 config.go:182] Loaded profile config "test-preload-769198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0913 19:35:06.357608   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.359940   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.360314   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.360341   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.360493   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.360661   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.360790   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.360922   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.361106   46862 main.go:141] libmachine: Using SSH client type: native
	I0913 19:35:06.361271   46862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0913 19:35:06.361285   46862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:35:06.584667   46862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:35:06.584691   46862 machine.go:96] duration metric: took 934.491627ms to provisionDockerMachine
	I0913 19:35:06.584702   46862 start.go:293] postStartSetup for "test-preload-769198" (driver="kvm2")
	I0913 19:35:06.584714   46862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:35:06.584735   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:06.585024   46862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:35:06.585045   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.587821   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.588189   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.588225   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.588344   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.588525   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.588694   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.588837   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:06.673280   46862 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:35:06.677571   46862 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:35:06.677593   46862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:35:06.677659   46862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:35:06.677757   46862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:35:06.677876   46862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:35:06.687385   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:35:06.709903   46862 start.go:296] duration metric: took 125.18985ms for postStartSetup
	I0913 19:35:06.709941   46862 fix.go:56] duration metric: took 19.953290371s for fixHost
	I0913 19:35:06.709964   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.712573   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.712875   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.712904   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.713032   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.713270   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.713425   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.713550   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.713687   46862 main.go:141] libmachine: Using SSH client type: native
	I0913 19:35:06.713851   46862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0913 19:35:06.713861   46862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:35:06.822546   46862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726256106.791163823
	
	I0913 19:35:06.822565   46862 fix.go:216] guest clock: 1726256106.791163823
	I0913 19:35:06.822573   46862 fix.go:229] Guest: 2024-09-13 19:35:06.791163823 +0000 UTC Remote: 2024-09-13 19:35:06.709946174 +0000 UTC m=+35.414081681 (delta=81.217649ms)
	I0913 19:35:06.822600   46862 fix.go:200] guest clock delta is within tolerance: 81.217649ms
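
The clock-drift figures above are simply the guest reading of date +%s.%N minus the host-side timestamp recorded by fix.go. A minimal Go check using the two values copied from the log (illustrative only, not minikube code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Both timestamps are copied from the fix.go log lines above.
    	guest := time.Unix(1726256106, 791163823)                                     // guest: date +%s.%N
    	remote := time.Date(2024, time.September, 13, 19, 35, 6, 709946174, time.UTC) // host-side reading
    	fmt.Println(guest.Sub(remote)) // prints 81.217649ms, the delta checked against the tolerance
    }
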
	I0913 19:35:06.822607   46862 start.go:83] releasing machines lock for "test-preload-769198", held for 20.065966436s
	I0913 19:35:06.822627   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:06.822840   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetIP
	I0913 19:35:06.825469   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.825793   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.825820   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.825991   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:06.826480   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:06.826636   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:06.826717   46862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:35:06.826757   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.826784   46862 ssh_runner.go:195] Run: cat /version.json
	I0913 19:35:06.826804   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:06.829238   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.829264   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.829623   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.829648   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.829674   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:06.829693   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:06.829791   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.829929   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:06.829950   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.830079   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.830124   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:06.830231   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:06.830240   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:06.830396   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:06.906890   46862 ssh_runner.go:195] Run: systemctl --version
	I0913 19:35:06.933264   46862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:35:07.074290   46862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:35:07.080145   46862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:35:07.080214   46862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:35:07.097012   46862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:35:07.097043   46862 start.go:495] detecting cgroup driver to use...
	I0913 19:35:07.097109   46862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:35:07.114740   46862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:35:07.128861   46862 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:35:07.128930   46862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:35:07.143486   46862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:35:07.157485   46862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:35:07.271312   46862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:35:07.408365   46862 docker.go:233] disabling docker service ...
	I0913 19:35:07.408455   46862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:35:07.423064   46862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:35:07.436474   46862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:35:07.575270   46862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:35:07.703397   46862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:35:07.717139   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:35:07.735063   46862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0913 19:35:07.735118   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.745651   46862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:35:07.745705   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.756462   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.767028   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.777503   46862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:35:07.788334   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.798818   46862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.815556   46862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:35:07.826307   46862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:35:07.835946   46862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:35:07.835996   46862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:35:07.849683   46862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:35:07.859975   46862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:35:07.983041   46862 ssh_runner.go:195] Run: sudo systemctl restart crio
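
The sed edits above rewrite two keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.7 and the cgroup manager is switched to cgroupfs. A small Go sketch of that rewrite applied to an assumed sample of the drop-in (the regular expressions mirror the sed patterns from the log; this is not minikube's implementation):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Assumed sample content of 02-crio.conf before the rewrite.
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	// Same substitutions as the sed commands shown in the log above.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }
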
	I0913 19:35:08.070103   46862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:35:08.070176   46862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:35:08.074988   46862 start.go:563] Will wait 60s for crictl version
	I0913 19:35:08.075042   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:08.078789   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:35:08.118113   46862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:35:08.118211   46862 ssh_runner.go:195] Run: crio --version
	I0913 19:35:08.145227   46862 ssh_runner.go:195] Run: crio --version
	I0913 19:35:08.174895   46862 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0913 19:35:08.176225   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetIP
	I0913 19:35:08.178665   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:08.179008   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:08.179043   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:08.179252   46862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:35:08.183428   46862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:35:08.197986   46862 kubeadm.go:883] updating cluster {Name:test-preload-769198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-769198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:35:08.198126   46862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0913 19:35:08.198180   46862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:35:08.239594   46862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0913 19:35:08.239664   46862 ssh_runner.go:195] Run: which lz4
	I0913 19:35:08.243966   46862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:35:08.248460   46862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:35:08.248509   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0913 19:35:09.825682   46862 crio.go:462] duration metric: took 1.581743744s to copy over tarball
	I0913 19:35:09.825780   46862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:35:12.200692   46862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.374884706s)
	I0913 19:35:12.200719   46862 crio.go:469] duration metric: took 2.375011693s to extract the tarball
	I0913 19:35:12.200726   46862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:35:12.242215   46862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:35:12.282880   46862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0913 19:35:12.282910   46862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:35:12.282978   46862 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:35:12.282992   46862 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.283023   46862 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 19:35:12.283050   46862 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.283073   46862 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.283003   46862 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.283248   46862 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.283260   46862 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:12.284533   46862 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.284544   46862 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 19:35:12.284561   46862 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:35:12.284562   46862 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.284585   46862 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.284547   46862 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.284613   46862 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:12.284622   46862 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.476607   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.516310   46862 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0913 19:35:12.516362   46862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.516408   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.520386   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.523918   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.525943   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0913 19:35:12.545943   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.552100   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:12.565061   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.567201   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.592600   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.652614   46862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0913 19:35:12.652662   46862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.652710   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.702715   46862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0913 19:35:12.702759   46862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.702772   46862 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0913 19:35:12.702803   46862 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0913 19:35:12.702807   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.702841   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.742047   46862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0913 19:35:12.742090   46862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0913 19:35:12.742159   46862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.742214   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.742214   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0913 19:35:12.742289   46862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0913 19:35:12.742109   46862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:12.742310   46862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.742350   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.742381   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.742355   46862 ssh_runner.go:195] Run: which crictl
	I0913 19:35:12.742433   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.742444   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0913 19:35:12.758624   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.832060   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0913 19:35:12.832123   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.832169   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0913 19:35:12.832182   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0913 19:35:12.858236   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.858360   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:12.858403   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:12.863975   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:12.928681   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:12.976676   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0913 19:35:12.976696   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0913 19:35:12.976698   46862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0913 19:35:12.976795   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0913 19:35:12.976802   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0913 19:35:13.009563   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 19:35:13.013934   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:13.013986   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0913 19:35:13.035594   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0913 19:35:13.159652   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0913 19:35:13.159699   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0913 19:35:13.159762   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0913 19:35:13.159787   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 19:35:13.440203   46862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:35:16.102336   46862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.125512949s)
	I0913 19:35:16.102369   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0913 19:35:16.102401   46862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.092802813s)
	I0913 19:35:16.102459   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 19:35:16.102481   46862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.088477623s)
	I0913 19:35:16.102520   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0913 19:35:16.102549   46862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.088582793s)
	I0913 19:35:16.102600   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0913 19:35:16.102612   46862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 19:35:16.102553   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 19:35:16.102632   46862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.942828278s)
	I0913 19:35:16.102600   46862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (3.066982001s)
	I0913 19:35:16.102654   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0913 19:35:16.102661   46862 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 19:35:16.102684   46862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.942904565s)
	I0913 19:35:16.102694   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0913 19:35:16.102709   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0913 19:35:16.102721   46862 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.662492301s)
	I0913 19:35:16.102687   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0913 19:35:16.102815   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0913 19:35:16.108820   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0913 19:35:16.271462   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0913 19:35:16.271511   46862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0913 19:35:16.271550   46862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0913 19:35:16.271563   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0913 19:35:16.271572   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0913 19:35:16.271596   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0913 19:35:16.271646   46862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0913 19:35:16.276140   46862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0913 19:35:17.014972   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0913 19:35:17.015021   46862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0913 19:35:17.015074   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0913 19:35:17.859541   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0913 19:35:17.859583   46862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 19:35:17.859624   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0913 19:35:18.308415   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0913 19:35:18.308471   46862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0913 19:35:18.308559   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0913 19:35:18.757729   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0913 19:35:18.757780   46862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0913 19:35:18.757836   46862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0913 19:35:19.407340   46862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0913 19:35:19.407390   46862 cache_images.go:123] Successfully loaded all cached images
	I0913 19:35:19.407395   46862 cache_images.go:92] duration metric: took 7.124472018s to LoadCachedImages
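
Each cached image above goes through the same pattern: podman image inspect shows the expected hash is missing, the stale tag is removed with crictl rmi, the tarball already staged under /var/lib/minikube/images is reused, and podman load -i imports it into the container storage that CRI-O reads. A simplified sketch of one iteration of that loop, with the command strings and one image/tarball pair copied from the log (error handling trimmed; not the actual cache_images implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage mirrors the inspect -> rmi -> podman load sequence from the log.
    func loadCachedImage(image, tarball string) error {
    	// Remove the stale tag; it may already be absent, so the error is only reported.
    	if err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run(); err != nil {
    		fmt.Println("crictl rmi:", err)
    	}
    	// Import the cached tarball into the shared container storage.
    	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
    	err := loadCachedImage("registry.k8s.io/etcd:3.5.3-0", "/var/lib/minikube/images/etcd_3.5.3-0")
    	fmt.Println("load result:", err)
    }
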
	I0913 19:35:19.407406   46862 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.24.4 crio true true} ...
	I0913 19:35:19.407500   46862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-769198 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-769198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:35:19.407565   46862 ssh_runner.go:195] Run: crio config
	I0913 19:35:19.454368   46862 cni.go:84] Creating CNI manager for ""
	I0913 19:35:19.454390   46862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:35:19.454401   46862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:35:19.454417   46862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-769198 NodeName:test-preload-769198 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:35:19.454537   46862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-769198"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:35:19.454593   46862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0913 19:35:19.464770   46862 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:35:19.464811   46862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:35:19.474779   46862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0913 19:35:19.491381   46862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:35:19.507372   46862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0913 19:35:19.524080   46862 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0913 19:35:19.527776   46862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:35:19.540793   46862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:35:19.653045   46862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:35:19.670647   46862 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198 for IP: 192.168.39.171
	I0913 19:35:19.670678   46862 certs.go:194] generating shared ca certs ...
	I0913 19:35:19.670698   46862 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:35:19.670872   46862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:35:19.670926   46862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:35:19.670939   46862 certs.go:256] generating profile certs ...
	I0913 19:35:19.671068   46862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/client.key
	I0913 19:35:19.671148   46862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/apiserver.key.a2159fbf
	I0913 19:35:19.671207   46862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/proxy-client.key
	I0913 19:35:19.671369   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:35:19.671424   46862 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:35:19.671438   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:35:19.671471   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:35:19.671507   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:35:19.671536   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:35:19.671595   46862 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:35:19.672298   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:35:19.709876   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:35:19.749904   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:35:19.778265   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:35:19.818729   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:35:19.849247   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:35:19.882529   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:35:19.908683   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:35:19.932332   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:35:19.955663   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:35:19.978598   46862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:35:20.001554   46862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:35:20.018457   46862 ssh_runner.go:195] Run: openssl version
	I0913 19:35:20.024320   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:35:20.035122   46862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:35:20.039519   46862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:35:20.039570   46862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:35:20.045317   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:35:20.056066   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:35:20.067060   46862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:35:20.071353   46862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:35:20.071398   46862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:35:20.077037   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:35:20.087576   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:35:20.097837   46862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:35:20.102243   46862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:35:20.102277   46862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:35:20.107586   46862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:35:20.117994   46862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:35:20.122464   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:35:20.127998   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:35:20.133914   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:35:20.140032   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:35:20.145904   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:35:20.151740   46862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:35:20.157469   46862 kubeadm.go:392] StartCluster: {Name:test-preload-769198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-769198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:35:20.157555   46862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:35:20.157606   46862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:35:20.202374   46862 cri.go:89] found id: ""
	I0913 19:35:20.202456   46862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:35:20.214623   46862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:35:20.214643   46862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:35:20.214682   46862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:35:20.225364   46862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:35:20.225882   46862 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-769198" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:35:20.226031   46862 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-769198" cluster setting kubeconfig missing "test-preload-769198" context setting]
	I0913 19:35:20.226348   46862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:35:20.226978   46862 kapi.go:59] client config for test-preload-769198: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 19:35:20.227655   46862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:35:20.237362   46862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.171
	I0913 19:35:20.237396   46862 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:35:20.237408   46862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:35:20.237463   46862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:35:20.278559   46862 cri.go:89] found id: ""
	I0913 19:35:20.278628   46862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:35:20.299067   46862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:35:20.311350   46862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:35:20.311368   46862 kubeadm.go:157] found existing configuration files:
	
	I0913 19:35:20.311405   46862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:35:20.322781   46862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:35:20.322839   46862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:35:20.341447   46862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:35:20.352112   46862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:35:20.352183   46862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:35:20.361841   46862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:35:20.370867   46862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:35:20.370930   46862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:35:20.380437   46862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:35:20.389562   46862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:35:20.389625   46862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:35:20.399211   46862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:35:20.408602   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:20.495410   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:21.279485   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:21.533784   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:21.603973   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:21.708065   46862 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:35:21.708153   46862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:35:22.208280   46862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:35:22.708990   46862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:35:22.730555   46862 api_server.go:72] duration metric: took 1.02248998s to wait for apiserver process to appear ...
	I0913 19:35:22.730585   46862 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:35:22.730608   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:22.731140   46862 api_server.go:269] stopped: https://192.168.39.171:8443/healthz: Get "https://192.168.39.171:8443/healthz": dial tcp 192.168.39.171:8443: connect: connection refused
	I0913 19:35:23.230730   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:26.453628   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:35:26.453652   46862 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:35:26.453666   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:26.508530   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:35:26.508551   46862 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:35:26.730844   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:26.735585   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:35:26.735614   46862 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:35:27.231663   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:27.236799   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:35:27.236827   46862 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:35:27.731464   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:27.739575   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0913 19:35:27.751197   46862 api_server.go:141] control plane version: v1.24.4
	I0913 19:35:27.751224   46862 api_server.go:131] duration metric: took 5.020631259s to wait for apiserver health ...
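The progression above (connection refused, then 403 for system:anonymous, then 500 while the rbac and scheduling poststarthooks finish, then 200 "ok") is the apiserver health wait. Below is a minimal sketch of the same polling pattern; it assumes an unauthenticated probe and skips TLS verification purely for illustration, and is not minikube's api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls https://<host>/healthz until it returns 200 or the deadline passes.
func waitHealthz(host string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: a real client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + host + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", host, timeout)
}

func main() {
	if err := waitHealthz("192.168.39.171:8443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}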
	I0913 19:35:27.751234   46862 cni.go:84] Creating CNI manager for ""
	I0913 19:35:27.751241   46862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:35:27.752915   46862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:35:27.754323   46862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:35:27.772250   46862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:35:27.813166   46862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:35:27.813248   46862 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 19:35:27.813282   46862 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 19:35:27.822362   46862 system_pods.go:59] 7 kube-system pods found
	I0913 19:35:27.822391   46862 system_pods.go:61] "coredns-6d4b75cb6d-9w4z6" [9c3af864-6533-4d2d-8743-fe459b9e97dc] Running
	I0913 19:35:27.822398   46862 system_pods.go:61] "etcd-test-preload-769198" [787b0de8-4d19-40ff-96bd-bd9e0d18c782] Running
	I0913 19:35:27.822406   46862 system_pods.go:61] "kube-apiserver-test-preload-769198" [4c462d69-23ac-4589-adbf-04b918f65c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:35:27.822413   46862 system_pods.go:61] "kube-controller-manager-test-preload-769198" [644765b0-8196-40d0-a975-01598ad13328] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:35:27.822418   46862 system_pods.go:61] "kube-proxy-jz5gt" [9ca60293-585e-4563-a860-ff34ea85c16a] Running
	I0913 19:35:27.822426   46862 system_pods.go:61] "kube-scheduler-test-preload-769198" [52f89b53-ee5f-4106-bbf8-796a8657d80d] Running
	I0913 19:35:27.822438   46862 system_pods.go:61] "storage-provisioner" [be33a57b-7592-486e-9b28-636011baf9b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:35:27.822447   46862 system_pods.go:74] duration metric: took 9.259194ms to wait for pod list to return data ...
	I0913 19:35:27.822457   46862 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:35:27.827650   46862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:35:27.827673   46862 node_conditions.go:123] node cpu capacity is 2
	I0913 19:35:27.827685   46862 node_conditions.go:105] duration metric: took 5.222561ms to run NodePressure ...
	I0913 19:35:27.827704   46862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:35:28.041981   46862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:35:28.046714   46862 kubeadm.go:739] kubelet initialised
	I0913 19:35:28.046735   46862 kubeadm.go:740] duration metric: took 4.729908ms waiting for restarted kubelet to initialise ...
	I0913 19:35:28.046744   46862 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:35:28.052587   46862 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:28.058466   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.058488   46862 pod_ready.go:82] duration metric: took 5.871222ms for pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:28.058498   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.058506   46862 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:28.066947   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "etcd-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.066968   46862 pod_ready.go:82] duration metric: took 8.450554ms for pod "etcd-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:28.066978   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "etcd-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.066986   46862 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:28.074559   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "kube-apiserver-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.074587   46862 pod_ready.go:82] duration metric: took 7.590041ms for pod "kube-apiserver-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:28.074598   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "kube-apiserver-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.074611   46862 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:28.217407   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.217442   46862 pod_ready.go:82] duration metric: took 142.811277ms for pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:28.217454   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.217462   46862 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz5gt" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:28.616612   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "kube-proxy-jz5gt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.616635   46862 pod_ready.go:82] duration metric: took 399.162414ms for pod "kube-proxy-jz5gt" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:28.616644   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "kube-proxy-jz5gt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:28.616650   46862 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:29.016204   46862 pod_ready.go:98] node "test-preload-769198" hosting pod "kube-scheduler-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:29.016231   46862 pod_ready.go:82] duration metric: took 399.57411ms for pod "kube-scheduler-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	E0913 19:35:29.016242   46862 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-769198" hosting pod "kube-scheduler-test-preload-769198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:29.016250   46862 pod_ready.go:39] duration metric: took 969.496208ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:35:29.016272   46862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:35:29.028260   46862 ops.go:34] apiserver oom_adj: -16
	I0913 19:35:29.028282   46862 kubeadm.go:597] duration metric: took 8.813633856s to restartPrimaryControlPlane
	I0913 19:35:29.028289   46862 kubeadm.go:394] duration metric: took 8.870826546s to StartCluster
	I0913 19:35:29.028309   46862 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:35:29.028382   46862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:35:29.028949   46862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:35:29.029182   46862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:35:29.029274   46862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:35:29.029358   46862 addons.go:69] Setting storage-provisioner=true in profile "test-preload-769198"
	I0913 19:35:29.029370   46862 addons.go:69] Setting default-storageclass=true in profile "test-preload-769198"
	I0913 19:35:29.029395   46862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-769198"
	I0913 19:35:29.029416   46862 config.go:182] Loaded profile config "test-preload-769198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0913 19:35:29.029378   46862 addons.go:234] Setting addon storage-provisioner=true in "test-preload-769198"
	W0913 19:35:29.029503   46862 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:35:29.029551   46862 host.go:66] Checking if "test-preload-769198" exists ...
	I0913 19:35:29.029797   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:35:29.029839   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:35:29.029941   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:35:29.029993   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:35:29.031702   46862 out.go:177] * Verifying Kubernetes components...
	I0913 19:35:29.032976   46862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:35:29.044245   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0913 19:35:29.044701   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:35:29.045267   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:35:29.045289   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:35:29.045464   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0913 19:35:29.045631   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:35:29.045827   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetState
	I0913 19:35:29.045932   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:35:29.046428   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:35:29.046449   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:35:29.046740   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:35:29.047359   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:35:29.047400   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:35:29.048011   46862 kapi.go:59] client config for test-preload-769198: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/client.crt", KeyFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/profiles/test-preload-769198/client.key", CAFile:"/home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6f9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 19:35:29.048276   46862 addons.go:234] Setting addon default-storageclass=true in "test-preload-769198"
	W0913 19:35:29.048296   46862 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:35:29.048321   46862 host.go:66] Checking if "test-preload-769198" exists ...
	I0913 19:35:29.048692   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:35:29.048732   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:35:29.061626   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0913 19:35:29.061978   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0913 19:35:29.062166   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:35:29.062436   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:35:29.062650   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:35:29.062674   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:35:29.062868   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:35:29.062888   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:35:29.063010   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:35:29.063176   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetState
	I0913 19:35:29.063243   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:35:29.063722   46862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:35:29.063768   46862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:35:29.064870   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:29.066801   46862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:35:29.068083   46862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:35:29.068100   46862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:35:29.068113   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:29.070770   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:29.071205   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:29.071233   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:29.071388   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:29.071561   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:29.071696   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:29.071829   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:29.100806   46862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0913 19:35:29.101177   46862 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:35:29.101637   46862 main.go:141] libmachine: Using API Version  1
	I0913 19:35:29.101659   46862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:35:29.101971   46862 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:35:29.102153   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetState
	I0913 19:35:29.103943   46862 main.go:141] libmachine: (test-preload-769198) Calling .DriverName
	I0913 19:35:29.104132   46862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:35:29.104146   46862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:35:29.104160   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHHostname
	I0913 19:35:29.106823   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:29.107247   46862 main.go:141] libmachine: (test-preload-769198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f0:e3", ip: ""} in network mk-test-preload-769198: {Iface:virbr1 ExpiryTime:2024-09-13 20:34:57 +0000 UTC Type:0 Mac:52:54:00:b8:f0:e3 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-769198 Clientid:01:52:54:00:b8:f0:e3}
	I0913 19:35:29.107286   46862 main.go:141] libmachine: (test-preload-769198) DBG | domain test-preload-769198 has defined IP address 192.168.39.171 and MAC address 52:54:00:b8:f0:e3 in network mk-test-preload-769198
	I0913 19:35:29.107423   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHPort
	I0913 19:35:29.107598   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHKeyPath
	I0913 19:35:29.107745   46862 main.go:141] libmachine: (test-preload-769198) Calling .GetSSHUsername
	I0913 19:35:29.107878   46862 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/test-preload-769198/id_rsa Username:docker}
	I0913 19:35:29.198466   46862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:35:29.215951   46862 node_ready.go:35] waiting up to 6m0s for node "test-preload-769198" to be "Ready" ...
	I0913 19:35:29.312071   46862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:35:29.331127   46862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:35:30.341661   46862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.029562605s)
	I0913 19:35:30.341708   46862 main.go:141] libmachine: Making call to close driver server
	I0913 19:35:30.341717   46862 main.go:141] libmachine: (test-preload-769198) Calling .Close
	I0913 19:35:30.341665   46862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010495421s)
	I0913 19:35:30.341816   46862 main.go:141] libmachine: Making call to close driver server
	I0913 19:35:30.341832   46862 main.go:141] libmachine: (test-preload-769198) Calling .Close
	I0913 19:35:30.341976   46862 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:35:30.341992   46862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:35:30.342002   46862 main.go:141] libmachine: Making call to close driver server
	I0913 19:35:30.342010   46862 main.go:141] libmachine: (test-preload-769198) Calling .Close
	I0913 19:35:30.342062   46862 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:35:30.342125   46862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:35:30.342139   46862 main.go:141] libmachine: Making call to close driver server
	I0913 19:35:30.342147   46862 main.go:141] libmachine: (test-preload-769198) Calling .Close
	I0913 19:35:30.342075   46862 main.go:141] libmachine: (test-preload-769198) DBG | Closing plugin on server side
	I0913 19:35:30.342267   46862 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:35:30.342282   46862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:35:30.342295   46862 main.go:141] libmachine: (test-preload-769198) DBG | Closing plugin on server side
	I0913 19:35:30.342399   46862 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:35:30.342409   46862 main.go:141] libmachine: (test-preload-769198) DBG | Closing plugin on server side
	I0913 19:35:30.342412   46862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:35:30.349611   46862 main.go:141] libmachine: Making call to close driver server
	I0913 19:35:30.349626   46862 main.go:141] libmachine: (test-preload-769198) Calling .Close
	I0913 19:35:30.349880   46862 main.go:141] libmachine: (test-preload-769198) DBG | Closing plugin on server side
	I0913 19:35:30.349895   46862 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:35:30.349916   46862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:35:30.352382   46862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0913 19:35:30.353492   46862 addons.go:510] duration metric: took 1.324226092s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0913 19:35:31.221222   46862 node_ready.go:53] node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:33.719659   46862 node_ready.go:53] node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:35.720304   46862 node_ready.go:53] node "test-preload-769198" has status "Ready":"False"
	I0913 19:35:37.220313   46862 node_ready.go:49] node "test-preload-769198" has status "Ready":"True"
	I0913 19:35:37.220334   46862 node_ready.go:38] duration metric: took 8.004348839s for node "test-preload-769198" to be "Ready" ...
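node_ready.go above polls the Node object until its Ready condition reports True. Below is a comparable sketch using client-go; the helper name, the 2s poll interval, and the error handling are illustrative assumptions, not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the timeout expires.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The test waits up to 6m0s, as in the start.go line above.
	if err := waitNodeReady(cs, "test-preload-769198", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}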
	I0913 19:35:37.220343   46862 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:35:37.224999   46862 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:37.229370   46862 pod_ready.go:93] pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:37.229386   46862 pod_ready.go:82] duration metric: took 4.366582ms for pod "coredns-6d4b75cb6d-9w4z6" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:37.229393   46862 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:38.750277   46862 pod_ready.go:93] pod "etcd-test-preload-769198" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:38.750299   46862 pod_ready.go:82] duration metric: took 1.520899429s for pod "etcd-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:38.750309   46862 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:40.758166   46862 pod_ready.go:103] pod "kube-apiserver-test-preload-769198" in "kube-system" namespace has status "Ready":"False"
	I0913 19:35:42.257743   46862 pod_ready.go:93] pod "kube-apiserver-test-preload-769198" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:42.257764   46862 pod_ready.go:82] duration metric: took 3.507449974s for pod "kube-apiserver-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.257773   46862 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.262173   46862 pod_ready.go:93] pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:42.262191   46862 pod_ready.go:82] duration metric: took 4.412191ms for pod "kube-controller-manager-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.262200   46862 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jz5gt" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.266603   46862 pod_ready.go:93] pod "kube-proxy-jz5gt" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:42.266628   46862 pod_ready.go:82] duration metric: took 4.420824ms for pod "kube-proxy-jz5gt" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.266640   46862 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.270618   46862 pod_ready.go:93] pod "kube-scheduler-test-preload-769198" in "kube-system" namespace has status "Ready":"True"
	I0913 19:35:42.270633   46862 pod_ready.go:82] duration metric: took 3.986302ms for pod "kube-scheduler-test-preload-769198" in "kube-system" namespace to be "Ready" ...
	I0913 19:35:42.270642   46862 pod_ready.go:39] duration metric: took 5.050289972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:35:42.270654   46862 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:35:42.270701   46862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:35:42.285542   46862 api_server.go:72] duration metric: took 13.25633161s to wait for apiserver process to appear ...
	I0913 19:35:42.285564   46862 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:35:42.285586   46862 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0913 19:35:42.290745   46862 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0913 19:35:42.291542   46862 api_server.go:141] control plane version: v1.24.4
	I0913 19:35:42.291559   46862 api_server.go:131] duration metric: took 5.989011ms to wait for apiserver health ...
	I0913 19:35:42.291568   46862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:35:42.296922   46862 system_pods.go:59] 7 kube-system pods found
	I0913 19:35:42.296946   46862 system_pods.go:61] "coredns-6d4b75cb6d-9w4z6" [9c3af864-6533-4d2d-8743-fe459b9e97dc] Running
	I0913 19:35:42.296951   46862 system_pods.go:61] "etcd-test-preload-769198" [787b0de8-4d19-40ff-96bd-bd9e0d18c782] Running
	I0913 19:35:42.296955   46862 system_pods.go:61] "kube-apiserver-test-preload-769198" [4c462d69-23ac-4589-adbf-04b918f65c2b] Running
	I0913 19:35:42.296959   46862 system_pods.go:61] "kube-controller-manager-test-preload-769198" [644765b0-8196-40d0-a975-01598ad13328] Running
	I0913 19:35:42.296962   46862 system_pods.go:61] "kube-proxy-jz5gt" [9ca60293-585e-4563-a860-ff34ea85c16a] Running
	I0913 19:35:42.296965   46862 system_pods.go:61] "kube-scheduler-test-preload-769198" [52f89b53-ee5f-4106-bbf8-796a8657d80d] Running
	I0913 19:35:42.296967   46862 system_pods.go:61] "storage-provisioner" [be33a57b-7592-486e-9b28-636011baf9b0] Running
	I0913 19:35:42.296972   46862 system_pods.go:74] duration metric: took 5.398896ms to wait for pod list to return data ...
	I0913 19:35:42.296981   46862 default_sa.go:34] waiting for default service account to be created ...
	I0913 19:35:42.419966   46862 default_sa.go:45] found service account: "default"
	I0913 19:35:42.419989   46862 default_sa.go:55] duration metric: took 123.003417ms for default service account to be created ...
	I0913 19:35:42.419997   46862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 19:35:42.622425   46862 system_pods.go:86] 7 kube-system pods found
	I0913 19:35:42.622452   46862 system_pods.go:89] "coredns-6d4b75cb6d-9w4z6" [9c3af864-6533-4d2d-8743-fe459b9e97dc] Running
	I0913 19:35:42.622459   46862 system_pods.go:89] "etcd-test-preload-769198" [787b0de8-4d19-40ff-96bd-bd9e0d18c782] Running
	I0913 19:35:42.622467   46862 system_pods.go:89] "kube-apiserver-test-preload-769198" [4c462d69-23ac-4589-adbf-04b918f65c2b] Running
	I0913 19:35:42.622470   46862 system_pods.go:89] "kube-controller-manager-test-preload-769198" [644765b0-8196-40d0-a975-01598ad13328] Running
	I0913 19:35:42.622473   46862 system_pods.go:89] "kube-proxy-jz5gt" [9ca60293-585e-4563-a860-ff34ea85c16a] Running
	I0913 19:35:42.622476   46862 system_pods.go:89] "kube-scheduler-test-preload-769198" [52f89b53-ee5f-4106-bbf8-796a8657d80d] Running
	I0913 19:35:42.622479   46862 system_pods.go:89] "storage-provisioner" [be33a57b-7592-486e-9b28-636011baf9b0] Running
	I0913 19:35:42.622491   46862 system_pods.go:126] duration metric: took 202.489705ms to wait for k8s-apps to be running ...
	I0913 19:35:42.622497   46862 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 19:35:42.622541   46862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:35:42.637634   46862 system_svc.go:56] duration metric: took 15.130657ms WaitForService to wait for kubelet
	I0913 19:35:42.637658   46862 kubeadm.go:582] duration metric: took 13.608451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:35:42.637675   46862 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:35:42.820959   46862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:35:42.820980   46862 node_conditions.go:123] node cpu capacity is 2
	I0913 19:35:42.820988   46862 node_conditions.go:105] duration metric: took 183.309673ms to run NodePressure ...
	I0913 19:35:42.820999   46862 start.go:241] waiting for startup goroutines ...
	I0913 19:35:42.821005   46862 start.go:246] waiting for cluster config update ...
	I0913 19:35:42.821015   46862 start.go:255] writing updated cluster config ...
	I0913 19:35:42.821260   46862 ssh_runner.go:195] Run: rm -f paused
	I0913 19:35:42.868459   46862 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0913 19:35:42.870419   46862 out.go:201] 
	W0913 19:35:42.871771   46862 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0913 19:35:42.872917   46862 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0913 19:35:42.874104   46862 out.go:177] * Done! kubectl is now configured to use "test-preload-769198" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.815365872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ce4788d-7b03-4c20-abae-e391332a3400 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.816972761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=087463f3-88f3-4fd8-a1cd-523f780d2ce8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.817400110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256143817379581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=087463f3-88f3-4fd8-a1cd-523f780d2ce8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.818370212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f528dab-5b7e-4496-b22c-83f407e29658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.818484412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f528dab-5b7e-4496-b22c-83f407e29658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.818707117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88b953d130d3b8306834138204ab250352e56cf2c0b69951327e108705ffcb70,PodSandboxId:ec308e3b5920e34543e6ad5994688289ff1796a6420fa08caf838f97ce410e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726256134790389341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9w4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3af864-6533-4d2d-8743-fe459b9e97dc,},Annotations:map[string]string{io.kubernetes.container.hash: f0524fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9baea2f73535a3eb5244f2cc47f93d4c740c67897c652e33f42ba6fa6bf2b51b,PodSandboxId:a93ce3d619d8266974210bda5eb420a7121dc9651d733811a17cf9a647d8f20d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726256127671793039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jz5gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9ca60293-585e-4563-a860-ff34ea85c16a,},Annotations:map[string]string{io.kubernetes.container.hash: cf1aa7bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9ffa012e7264056edc000b2228ddc80c42e30133f4e9587636471294655eb1,PodSandboxId:c9e2009775c9eaf09f8bf28fdd1c34ba2f8ea70645d16e6db0bd654ed363590b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256127361966984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
33a57b-7592-486e-9b28-636011baf9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 746db250,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25943ae50cbaf1e6d870d6dc5785361bc3f3fd0fa67553f6518ecb2c5c188b12,PodSandboxId:6085a66adcf1d3cf14bd43008915208551b4d76620925d7c1d1a9c4adad6c1b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726256122470978870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b81cc55
0050dde2f7b8d3cccb41c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bef9a0c4302272cf803453cec9009ab1f4b13c0ef7b030666ed6704138bdcd3,PodSandboxId:3c400515c5141f67780be7f69c0e22314fd92f1b13ec92de58265e0a17e9a1a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726256122405260314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cb491b5aa60a3117664f483881c2b1da,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8938a26b91edf99b7be91713612b338b02885fd0aea07f392dd2c17efcfed760,PodSandboxId:27b10debe638e4ca44493e7a96454a6d85bfc84e46745fe710194cf773bb81d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726256122398745010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 802b
2c27973e71fbf1a9f80717976a87,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f6d826576a8a5bdde1840a5b6689b9a708d016a350d9853c22dfb215ea7023,PodSandboxId:53737e371c3594c0894436b046a61673b6eea0f3cb94d2b22feaac7e909f5d3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726256122362383478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7127b7ffdab61f788b2a514e4baee6b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7fe81924,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f528dab-5b7e-4496-b22c-83f407e29658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.858329482Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3de7971c-833c-4565-924e-4f5f1dcb86eb name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.858610431Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ec308e3b5920e34543e6ad5994688289ff1796a6420fa08caf838f97ce410e9b,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-9w4z6,Uid:9c3af864-6533-4d2d-8743-fe459b9e97dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256134567838319,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-9w4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3af864-6533-4d2d-8743-fe459b9e97dc,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T19:35:26.654217205Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a93ce3d619d8266974210bda5eb420a7121dc9651d733811a17cf9a647d8f20d,Metadata:&PodSandboxMetadata{Name:kube-proxy-jz5gt,Uid:9ca60293-585e-4563-a860-ff34ea85c16a,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1726256127562859040,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jz5gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca60293-585e-4563-a860-ff34ea85c16a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T19:35:26.654214252Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9e2009775c9eaf09f8bf28fdd1c34ba2f8ea70645d16e6db0bd654ed363590b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:be33a57b-7592-486e-9b28-636011baf9b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256127260834170,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be33a57b-7592-486e-9b28-6360
11baf9b0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T19:35:26.654216160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6085a66adcf1d3cf14bd43008915208551b4d76620925d7c1d1a9c4adad6c1b1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-769198,Uid:21b81cc
550050dde2f7b8d3cccb41c4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256122202318666,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b81cc550050dde2f7b8d3cccb41c4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 21b81cc550050dde2f7b8d3cccb41c4e,kubernetes.io/config.seen: 2024-09-13T19:35:21.666280142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27b10debe638e4ca44493e7a96454a6d85bfc84e46745fe710194cf773bb81d8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-769198,Uid:802b2c27973e71fbf1a9f80717976a87,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256122186216546,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-769198,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 802b2c27973e71fbf1a9f80717976a87,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: 802b2c27973e71fbf1a9f80717976a87,kubernetes.io/config.seen: 2024-09-13T19:35:21.666256871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c400515c5141f67780be7f69c0e22314fd92f1b13ec92de58265e0a17e9a1a5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-769198,Uid:cb491b5aa60a3117664f483881c2b1da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256122183517397,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb491b5aa60a3117664f483881c2b1da,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cb491b5aa60a3117664f483881c2b1da,kub
ernetes.io/config.seen: 2024-09-13T19:35:21.666279115Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53737e371c3594c0894436b046a61673b6eea0f3cb94d2b22feaac7e909f5d3d,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-769198,Uid:7127b7ffdab61f788b2a514e4baee6b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726256122182976811,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7127b7ffdab61f788b2a514e4baee6b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.171:2379,kubernetes.io/config.hash: 7127b7ffdab61f788b2a514e4baee6b7,kubernetes.io/config.seen: 2024-09-13T19:35:21.696966745Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3de7971c-833c-4565-924e-4f5f1dcb86eb name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.860360311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3d5e6c7-3cb1-4cfe-b8ce-06fa1e22e68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.860415120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3d5e6c7-3cb1-4cfe-b8ce-06fa1e22e68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.862693955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88b953d130d3b8306834138204ab250352e56cf2c0b69951327e108705ffcb70,PodSandboxId:ec308e3b5920e34543e6ad5994688289ff1796a6420fa08caf838f97ce410e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726256134790389341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9w4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3af864-6533-4d2d-8743-fe459b9e97dc,},Annotations:map[string]string{io.kubernetes.container.hash: f0524fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9baea2f73535a3eb5244f2cc47f93d4c740c67897c652e33f42ba6fa6bf2b51b,PodSandboxId:a93ce3d619d8266974210bda5eb420a7121dc9651d733811a17cf9a647d8f20d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726256127671793039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jz5gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9ca60293-585e-4563-a860-ff34ea85c16a,},Annotations:map[string]string{io.kubernetes.container.hash: cf1aa7bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9ffa012e7264056edc000b2228ddc80c42e30133f4e9587636471294655eb1,PodSandboxId:c9e2009775c9eaf09f8bf28fdd1c34ba2f8ea70645d16e6db0bd654ed363590b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256127361966984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
33a57b-7592-486e-9b28-636011baf9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 746db250,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25943ae50cbaf1e6d870d6dc5785361bc3f3fd0fa67553f6518ecb2c5c188b12,PodSandboxId:6085a66adcf1d3cf14bd43008915208551b4d76620925d7c1d1a9c4adad6c1b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726256122470978870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b81cc55
0050dde2f7b8d3cccb41c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bef9a0c4302272cf803453cec9009ab1f4b13c0ef7b030666ed6704138bdcd3,PodSandboxId:3c400515c5141f67780be7f69c0e22314fd92f1b13ec92de58265e0a17e9a1a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726256122405260314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cb491b5aa60a3117664f483881c2b1da,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8938a26b91edf99b7be91713612b338b02885fd0aea07f392dd2c17efcfed760,PodSandboxId:27b10debe638e4ca44493e7a96454a6d85bfc84e46745fe710194cf773bb81d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726256122398745010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 802b
2c27973e71fbf1a9f80717976a87,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f6d826576a8a5bdde1840a5b6689b9a708d016a350d9853c22dfb215ea7023,PodSandboxId:53737e371c3594c0894436b046a61673b6eea0f3cb94d2b22feaac7e909f5d3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726256122362383478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7127b7ffdab61f788b2a514e4baee6b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7fe81924,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3d5e6c7-3cb1-4cfe-b8ce-06fa1e22e68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.868073077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3874b779-b057-4cf3-8b69-ee0efd87a070 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.868152852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3874b779-b057-4cf3-8b69-ee0efd87a070 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.869120449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35798bcf-8f80-4f66-861c-bb4a8cdf37da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.869600715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256143869580887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35798bcf-8f80-4f66-861c-bb4a8cdf37da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.870048128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37c31dda-c349-4deb-9d84-e15de3f04833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.870111431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37c31dda-c349-4deb-9d84-e15de3f04833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.870265190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88b953d130d3b8306834138204ab250352e56cf2c0b69951327e108705ffcb70,PodSandboxId:ec308e3b5920e34543e6ad5994688289ff1796a6420fa08caf838f97ce410e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726256134790389341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9w4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3af864-6533-4d2d-8743-fe459b9e97dc,},Annotations:map[string]string{io.kubernetes.container.hash: f0524fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9baea2f73535a3eb5244f2cc47f93d4c740c67897c652e33f42ba6fa6bf2b51b,PodSandboxId:a93ce3d619d8266974210bda5eb420a7121dc9651d733811a17cf9a647d8f20d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726256127671793039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jz5gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9ca60293-585e-4563-a860-ff34ea85c16a,},Annotations:map[string]string{io.kubernetes.container.hash: cf1aa7bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9ffa012e7264056edc000b2228ddc80c42e30133f4e9587636471294655eb1,PodSandboxId:c9e2009775c9eaf09f8bf28fdd1c34ba2f8ea70645d16e6db0bd654ed363590b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256127361966984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
33a57b-7592-486e-9b28-636011baf9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 746db250,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25943ae50cbaf1e6d870d6dc5785361bc3f3fd0fa67553f6518ecb2c5c188b12,PodSandboxId:6085a66adcf1d3cf14bd43008915208551b4d76620925d7c1d1a9c4adad6c1b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726256122470978870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b81cc55
0050dde2f7b8d3cccb41c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bef9a0c4302272cf803453cec9009ab1f4b13c0ef7b030666ed6704138bdcd3,PodSandboxId:3c400515c5141f67780be7f69c0e22314fd92f1b13ec92de58265e0a17e9a1a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726256122405260314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cb491b5aa60a3117664f483881c2b1da,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8938a26b91edf99b7be91713612b338b02885fd0aea07f392dd2c17efcfed760,PodSandboxId:27b10debe638e4ca44493e7a96454a6d85bfc84e46745fe710194cf773bb81d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726256122398745010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 802b
2c27973e71fbf1a9f80717976a87,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f6d826576a8a5bdde1840a5b6689b9a708d016a350d9853c22dfb215ea7023,PodSandboxId:53737e371c3594c0894436b046a61673b6eea0f3cb94d2b22feaac7e909f5d3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726256122362383478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7127b7ffdab61f788b2a514e4baee6b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7fe81924,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37c31dda-c349-4deb-9d84-e15de3f04833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.912189214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=463382c8-8688-4f6a-8d53-484994b5cf2c name=/runtime.v1.RuntimeService/Version
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.912301554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=463382c8-8688-4f6a-8d53-484994b5cf2c name=/runtime.v1.RuntimeService/Version
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.913915907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b48233ff-b62e-43a6-8b31-9807ccfa19eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.914810482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256143914777638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b48233ff-b62e-43a6-8b31-9807ccfa19eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.915504920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7d1fbcf-e694-4ccf-a189-3314aa45b343 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.915594832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7d1fbcf-e694-4ccf-a189-3314aa45b343 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:35:43 test-preload-769198 crio[661]: time="2024-09-13 19:35:43.915815577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88b953d130d3b8306834138204ab250352e56cf2c0b69951327e108705ffcb70,PodSandboxId:ec308e3b5920e34543e6ad5994688289ff1796a6420fa08caf838f97ce410e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726256134790389341,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9w4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3af864-6533-4d2d-8743-fe459b9e97dc,},Annotations:map[string]string{io.kubernetes.container.hash: f0524fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9baea2f73535a3eb5244f2cc47f93d4c740c67897c652e33f42ba6fa6bf2b51b,PodSandboxId:a93ce3d619d8266974210bda5eb420a7121dc9651d733811a17cf9a647d8f20d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726256127671793039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jz5gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9ca60293-585e-4563-a860-ff34ea85c16a,},Annotations:map[string]string{io.kubernetes.container.hash: cf1aa7bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9ffa012e7264056edc000b2228ddc80c42e30133f4e9587636471294655eb1,PodSandboxId:c9e2009775c9eaf09f8bf28fdd1c34ba2f8ea70645d16e6db0bd654ed363590b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256127361966984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
33a57b-7592-486e-9b28-636011baf9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 746db250,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25943ae50cbaf1e6d870d6dc5785361bc3f3fd0fa67553f6518ecb2c5c188b12,PodSandboxId:6085a66adcf1d3cf14bd43008915208551b4d76620925d7c1d1a9c4adad6c1b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726256122470978870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b81cc55
0050dde2f7b8d3cccb41c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bef9a0c4302272cf803453cec9009ab1f4b13c0ef7b030666ed6704138bdcd3,PodSandboxId:3c400515c5141f67780be7f69c0e22314fd92f1b13ec92de58265e0a17e9a1a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726256122405260314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cb491b5aa60a3117664f483881c2b1da,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8938a26b91edf99b7be91713612b338b02885fd0aea07f392dd2c17efcfed760,PodSandboxId:27b10debe638e4ca44493e7a96454a6d85bfc84e46745fe710194cf773bb81d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726256122398745010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 802b
2c27973e71fbf1a9f80717976a87,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f6d826576a8a5bdde1840a5b6689b9a708d016a350d9853c22dfb215ea7023,PodSandboxId:53737e371c3594c0894436b046a61673b6eea0f3cb94d2b22feaac7e909f5d3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726256122362383478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-769198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7127b7ffdab61f788b2a514e4baee6b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7fe81924,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7d1fbcf-e694-4ccf-a189-3314aa45b343 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88b953d130d3b       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   ec308e3b5920e       coredns-6d4b75cb6d-9w4z6
	9baea2f73535a       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   a93ce3d619d82       kube-proxy-jz5gt
	4f9ffa012e726       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   c9e2009775c9e       storage-provisioner
	25943ae50cbaf       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   6085a66adcf1d       kube-scheduler-test-preload-769198
	8bef9a0c43022       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   3c400515c5141       kube-controller-manager-test-preload-769198
	8938a26b91edf       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   27b10debe638e       kube-apiserver-test-preload-769198
	18f6d826576a8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   53737e371c359       etcd-test-preload-769198
	
	
	==> coredns [88b953d130d3b8306834138204ab250352e56cf2c0b69951327e108705ffcb70] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:43549 - 19025 "HINFO IN 396186686525728643.6287226622119271730. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01525153s
	
	
	==> describe nodes <==
	Name:               test-preload-769198
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-769198
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=test-preload-769198
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_33_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-769198
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:35:36 +0000   Fri, 13 Sep 2024 19:33:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:35:36 +0000   Fri, 13 Sep 2024 19:33:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:35:36 +0000   Fri, 13 Sep 2024 19:33:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:35:36 +0000   Fri, 13 Sep 2024 19:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    test-preload-769198
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66559252bf024bbc953500b92057221d
	  System UUID:                66559252-bf02-4bbc-9535-00b92057221d
	  Boot ID:                    821f4cb7-78c4-4ad4-9b94-29ca0ed166be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9w4z6                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m7s
	  kube-system                 etcd-test-preload-769198                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m19s
	  kube-system                 kube-apiserver-test-preload-769198             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-test-preload-769198    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-jz5gt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-test-preload-769198             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 2m4s               kube-proxy       
	  Normal  Starting                 2m19s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m19s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m19s              kubelet          Node test-preload-769198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s              kubelet          Node test-preload-769198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s              kubelet          Node test-preload-769198 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m9s               kubelet          Node test-preload-769198 status is now: NodeReady
	  Normal  RegisteredNode           2m8s               node-controller  Node test-preload-769198 event: Registered Node test-preload-769198 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-769198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-769198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-769198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node test-preload-769198 event: Registered Node test-preload-769198 in Controller
	
	
	==> dmesg <==
	[Sep13 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050854] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.794807] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.523383] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.567015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep13 19:35] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.058426] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059455] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.168689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.142950] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.275595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[ +11.680008] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.055299] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.810347] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +5.875039] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.762024] systemd-fstab-generator[1731]: Ignoring "noauto" option for root device
	[  +5.508380] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [18f6d826576a8a5bdde1840a5b6689b9a708d016a350d9853c22dfb215ea7023] <==
	{"level":"info","ts":"2024-09-13T19:35:22.791Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4e6b9cdcc1ed933f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-13T19:35:22.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)"}
	{"level":"info","ts":"2024-09-13T19:35:22.792Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-09-13T19:35:22.792Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:35:22.792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:35:22.794Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"4e6b9cdcc1ed933f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-13T19:35:22.799Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-09-13T19:35:22.799Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-09-13T19:35:22.805Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T19:35:22.807Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:35:22.807Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:35:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-09-13T19:35:23.341Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:test-preload-769198 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:35:23.341Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:35:23.345Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:35:23.346Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-09-13T19:35:23.346Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:35:23.347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:35:23.348Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:35:44 up 0 min,  0 users,  load average: 0.99, 0.27, 0.09
	Linux test-preload-769198 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8938a26b91edf99b7be91713612b338b02885fd0aea07f392dd2c17efcfed760] <==
	I0913 19:35:26.392088       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0913 19:35:26.392115       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0913 19:35:26.392557       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0913 19:35:26.435506       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0913 19:35:26.392581       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0913 19:35:26.392592       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0913 19:35:26.507081       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0913 19:35:26.513858       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0913 19:35:26.533671       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0913 19:35:26.535662       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0913 19:35:26.536309       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:35:26.590904       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0913 19:35:26.597735       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:35:26.601784       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:35:26.606749       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0913 19:35:27.092604       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0913 19:35:27.397990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:35:27.916913       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0913 19:35:27.932896       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0913 19:35:27.983057       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0913 19:35:28.006865       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:35:28.015134       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:35:28.105492       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0913 19:35:38.886217       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 19:35:38.894093       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8bef9a0c4302272cf803453cec9009ab1f4b13c0ef7b030666ed6704138bdcd3] <==
	I0913 19:35:38.888367       1 shared_informer.go:262] Caches are synced for node
	I0913 19:35:38.888411       1 range_allocator.go:173] Starting range CIDR allocator
	I0913 19:35:38.888416       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0913 19:35:38.889012       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0913 19:35:38.889245       1 shared_informer.go:262] Caches are synced for disruption
	I0913 19:35:38.889322       1 disruption.go:371] Sending events to api server.
	I0913 19:35:38.895826       1 shared_informer.go:262] Caches are synced for crt configmap
	I0913 19:35:38.944339       1 shared_informer.go:262] Caches are synced for ephemeral
	I0913 19:35:38.950828       1 shared_informer.go:262] Caches are synced for persistent volume
	I0913 19:35:38.965332       1 shared_informer.go:262] Caches are synced for daemon sets
	I0913 19:35:38.971756       1 shared_informer.go:262] Caches are synced for PVC protection
	I0913 19:35:38.981502       1 shared_informer.go:262] Caches are synced for taint
	I0913 19:35:38.981655       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0913 19:35:38.981762       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-769198. Assuming now as a timestamp.
	I0913 19:35:38.981812       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0913 19:35:38.982180       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0913 19:35:38.982522       1 event.go:294] "Event occurred" object="test-preload-769198" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-769198 event: Registered Node test-preload-769198 in Controller"
	I0913 19:35:38.982738       1 shared_informer.go:262] Caches are synced for expand
	I0913 19:35:38.995929       1 shared_informer.go:262] Caches are synced for stateful set
	I0913 19:35:39.022642       1 shared_informer.go:262] Caches are synced for resource quota
	I0913 19:35:39.054021       1 shared_informer.go:262] Caches are synced for resource quota
	I0913 19:35:39.103826       1 shared_informer.go:262] Caches are synced for attach detach
	I0913 19:35:39.501314       1 shared_informer.go:262] Caches are synced for garbage collector
	I0913 19:35:39.509755       1 shared_informer.go:262] Caches are synced for garbage collector
	I0913 19:35:39.509790       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [9baea2f73535a3eb5244f2cc47f93d4c740c67897c652e33f42ba6fa6bf2b51b] <==
	I0913 19:35:28.028813       1 node.go:163] Successfully retrieved node IP: 192.168.39.171
	I0913 19:35:28.028904       1 server_others.go:138] "Detected node IP" address="192.168.39.171"
	I0913 19:35:28.028966       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0913 19:35:28.085091       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0913 19:35:28.085125       1 server_others.go:206] "Using iptables Proxier"
	I0913 19:35:28.085668       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0913 19:35:28.090757       1 server.go:661] "Version info" version="v1.24.4"
	I0913 19:35:28.090877       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:35:28.097939       1 config.go:317] "Starting service config controller"
	I0913 19:35:28.098029       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0913 19:35:28.098051       1 config.go:226] "Starting endpoint slice config controller"
	I0913 19:35:28.098054       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0913 19:35:28.102612       1 config.go:444] "Starting node config controller"
	I0913 19:35:28.102636       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0913 19:35:28.198888       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0913 19:35:28.199132       1 shared_informer.go:262] Caches are synced for service config
	I0913 19:35:28.202718       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [25943ae50cbaf1e6d870d6dc5785361bc3f3fd0fa67553f6518ecb2c5c188b12] <==
	I0913 19:35:23.542739       1 serving.go:348] Generated self-signed cert in-memory
	W0913 19:35:26.458520       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:35:26.460518       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:35:26.460612       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:35:26.460640       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:35:26.514470       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0913 19:35:26.514542       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:35:26.519695       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0913 19:35:26.521835       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0913 19:35:26.520187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:35:26.521914       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:35:26.622892       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.651220    1122 apiserver.go:52] "Watching apiserver"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.654482    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.654577    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.654613    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: E0913 19:35:26.658156    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9w4z6" podUID=9c3af864-6533-4d2d-8743-fe459b9e97dc
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697687    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ca60293-585e-4563-a860-ff34ea85c16a-lib-modules\") pod \"kube-proxy-jz5gt\" (UID: \"9ca60293-585e-4563-a860-ff34ea85c16a\") " pod="kube-system/kube-proxy-jz5gt"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697725    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kjx\" (UniqueName: \"kubernetes.io/projected/be33a57b-7592-486e-9b28-636011baf9b0-kube-api-access-r5kjx\") pod \"storage-provisioner\" (UID: \"be33a57b-7592-486e-9b28-636011baf9b0\") " pod="kube-system/storage-provisioner"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697760    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume\") pod \"coredns-6d4b75cb6d-9w4z6\" (UID: \"9c3af864-6533-4d2d-8743-fe459b9e97dc\") " pod="kube-system/coredns-6d4b75cb6d-9w4z6"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697782    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4tlk\" (UniqueName: \"kubernetes.io/projected/9c3af864-6533-4d2d-8743-fe459b9e97dc-kube-api-access-w4tlk\") pod \"coredns-6d4b75cb6d-9w4z6\" (UID: \"9c3af864-6533-4d2d-8743-fe459b9e97dc\") " pod="kube-system/coredns-6d4b75cb6d-9w4z6"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697801    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ca60293-585e-4563-a860-ff34ea85c16a-xtables-lock\") pod \"kube-proxy-jz5gt\" (UID: \"9ca60293-585e-4563-a860-ff34ea85c16a\") " pod="kube-system/kube-proxy-jz5gt"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697823    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/be33a57b-7592-486e-9b28-636011baf9b0-tmp\") pod \"storage-provisioner\" (UID: \"be33a57b-7592-486e-9b28-636011baf9b0\") " pod="kube-system/storage-provisioner"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697840    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ca60293-585e-4563-a860-ff34ea85c16a-kube-proxy\") pod \"kube-proxy-jz5gt\" (UID: \"9ca60293-585e-4563-a860-ff34ea85c16a\") " pod="kube-system/kube-proxy-jz5gt"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697859    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4ml\" (UniqueName: \"kubernetes.io/projected/9ca60293-585e-4563-a860-ff34ea85c16a-kube-api-access-nc4ml\") pod \"kube-proxy-jz5gt\" (UID: \"9ca60293-585e-4563-a860-ff34ea85c16a\") " pod="kube-system/kube-proxy-jz5gt"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: I0913 19:35:26.697874    1122 reconciler.go:159] "Reconciler: start to sync state"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: E0913 19:35:26.705292    1122 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: E0913 19:35:26.803192    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 13 19:35:26 test-preload-769198 kubelet[1122]: E0913 19:35:26.803273    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume podName:9c3af864-6533-4d2d-8743-fe459b9e97dc nodeName:}" failed. No retries permitted until 2024-09-13 19:35:27.303243735 +0000 UTC m=+5.786821507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume") pod "coredns-6d4b75cb6d-9w4z6" (UID: "9c3af864-6533-4d2d-8743-fe459b9e97dc") : object "kube-system"/"coredns" not registered
	Sep 13 19:35:27 test-preload-769198 kubelet[1122]: E0913 19:35:27.309642    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 13 19:35:27 test-preload-769198 kubelet[1122]: E0913 19:35:27.309737    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume podName:9c3af864-6533-4d2d-8743-fe459b9e97dc nodeName:}" failed. No retries permitted until 2024-09-13 19:35:28.309720421 +0000 UTC m=+6.793298189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume") pod "coredns-6d4b75cb6d-9w4z6" (UID: "9c3af864-6533-4d2d-8743-fe459b9e97dc") : object "kube-system"/"coredns" not registered
	Sep 13 19:35:28 test-preload-769198 kubelet[1122]: E0913 19:35:28.316174    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 13 19:35:28 test-preload-769198 kubelet[1122]: E0913 19:35:28.316260    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume podName:9c3af864-6533-4d2d-8743-fe459b9e97dc nodeName:}" failed. No retries permitted until 2024-09-13 19:35:30.316245454 +0000 UTC m=+8.799823223 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume") pod "coredns-6d4b75cb6d-9w4z6" (UID: "9c3af864-6533-4d2d-8743-fe459b9e97dc") : object "kube-system"/"coredns" not registered
	Sep 13 19:35:28 test-preload-769198 kubelet[1122]: E0913 19:35:28.756892    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9w4z6" podUID=9c3af864-6533-4d2d-8743-fe459b9e97dc
	Sep 13 19:35:30 test-preload-769198 kubelet[1122]: E0913 19:35:30.333778    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 13 19:35:30 test-preload-769198 kubelet[1122]: E0913 19:35:30.333926    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume podName:9c3af864-6533-4d2d-8743-fe459b9e97dc nodeName:}" failed. No retries permitted until 2024-09-13 19:35:34.333898612 +0000 UTC m=+12.817476392 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3af864-6533-4d2d-8743-fe459b9e97dc-config-volume") pod "coredns-6d4b75cb6d-9w4z6" (UID: "9c3af864-6533-4d2d-8743-fe459b9e97dc") : object "kube-system"/"coredns" not registered
	Sep 13 19:35:30 test-preload-769198 kubelet[1122]: E0913 19:35:30.760104    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9w4z6" podUID=9c3af864-6533-4d2d-8743-fe459b9e97dc
	
	
	==> storage-provisioner [4f9ffa012e7264056edc000b2228ddc80c42e30133f4e9587636471294655eb1] <==
	I0913 19:35:27.452480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-769198 -n test-preload-769198
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-769198 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-769198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-769198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-769198: (1.146055371s)
--- FAIL: TestPreload (220.15s)
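The post-mortem log above is dominated by two recoverable conditions: the kube-scheduler RBAC warning about reading the extension-apiserver-authentication configmap, and kubelet's "No CNI configuration file in /etc/cni/net.d/" errors while the network plugin comes up. For reference only, a minimal sketch of what each refers to; the file name, bridge/subnet values and the ROLEBINDING_NAME / YOUR_NS:YOUR_SA placeholders are illustrative (the latter taken from the warning text itself), not what minikube or CRI-O actually generate:

	# Illustrative bridge CNI config of the kind kubelet looks for under /etc/cni/net.d/
	# (minikube/CRI-O normally provision their own file; all values here are assumptions).
	sudo tee /etc/cni/net.d/10-bridge.conf >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "type": "bridge",
	  "bridge": "cni0",
	  "isGateway": true,
	  "ipMasq": true,
	  "ipam": {
	    "type": "host-local",
	    "subnet": "10.244.0.0/16",
	    "routes": [{ "dst": "0.0.0.0/0" }]
	  }
	}
	EOF

	# RBAC fix suggested verbatim by the kube-scheduler warning; the rolebinding name
	# and service account are placeholders.
	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA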

                                                
                                    
x
+
TestKubernetesUpgrade (435.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m51.517664641s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-421098] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-421098" primary control-plane node in "kubernetes-upgrade-421098" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
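In the stdout above, "Generating certificates and keys ..." and "Booting up control plane ..." appear twice, which typically means the control-plane bootstrap was attempted more than once before the start gave up with exit status 109. A hypothetical follow-up sketch for this kind of failure, runnable only while the kubernetes-upgrade-421098 profile still exists (standard minikube/systemd/cri tooling, not part of the test run itself):

	# Inspect kubelet and container state inside the VM, then collect minikube's own logs.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-421098 "sudo journalctl -u kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-421098 "sudo crictl ps -a"
	out/minikube-linux-amd64 logs -p kubernetes-upgrade-421098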
** stderr ** 
	I0913 19:41:35.570855   54234 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:41:35.571099   54234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:41:35.571108   54234 out.go:358] Setting ErrFile to fd 2...
	I0913 19:41:35.571112   54234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:41:35.571286   54234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:41:35.571814   54234 out.go:352] Setting JSON to false
	I0913 19:41:35.572696   54234 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5039,"bootTime":1726251457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:41:35.572784   54234 start.go:139] virtualization: kvm guest
	I0913 19:41:35.574958   54234 out.go:177] * [kubernetes-upgrade-421098] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:41:35.576573   54234 notify.go:220] Checking for updates...
	I0913 19:41:35.576592   54234 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:41:35.578077   54234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:41:35.579303   54234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:41:35.580633   54234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:41:35.581842   54234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:41:35.583085   54234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:41:35.584746   54234 config.go:182] Loaded profile config "cert-expiration-235626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:41:35.584848   54234 config.go:182] Loaded profile config "cert-options-718151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:41:35.584959   54234 config.go:182] Loaded profile config "pause-933457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:41:35.585051   54234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:41:35.626290   54234 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 19:41:35.627445   54234 start.go:297] selected driver: kvm2
	I0913 19:41:35.627456   54234 start.go:901] validating driver "kvm2" against <nil>
	I0913 19:41:35.627467   54234 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:41:35.628118   54234 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:41:35.628207   54234 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:41:35.643218   54234 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:41:35.643287   54234 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 19:41:35.643589   54234 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 19:41:35.643619   54234 cni.go:84] Creating CNI manager for ""
	I0913 19:41:35.643673   54234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:41:35.643684   54234 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 19:41:35.643767   54234 start.go:340] cluster config:
	{Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:41:35.643865   54234 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:41:35.645585   54234 out.go:177] * Starting "kubernetes-upgrade-421098" primary control-plane node in "kubernetes-upgrade-421098" cluster
	I0913 19:41:35.646809   54234 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:41:35.646853   54234 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:41:35.646868   54234 cache.go:56] Caching tarball of preloaded images
	I0913 19:41:35.646939   54234 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:41:35.646950   54234 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:41:35.647049   54234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/config.json ...
	I0913 19:41:35.647071   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/config.json: {Name:mkd28bc5263090e39ef85baa39947f6a3b049773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:41:35.647217   54234 start.go:360] acquireMachinesLock for kubernetes-upgrade-421098: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:41:58.198943   54234 start.go:364] duration metric: took 22.55170231s to acquireMachinesLock for "kubernetes-upgrade-421098"
	I0913 19:41:58.199022   54234 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:41:58.199123   54234 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 19:41:58.201170   54234 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 19:41:58.201385   54234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:41:58.201461   54234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:41:58.217663   54234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0913 19:41:58.218157   54234 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:41:58.218657   54234 main.go:141] libmachine: Using API Version  1
	I0913 19:41:58.218677   54234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:41:58.218980   54234 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:41:58.219166   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetMachineName
	I0913 19:41:58.219315   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:41:58.219496   54234 start.go:159] libmachine.API.Create for "kubernetes-upgrade-421098" (driver="kvm2")
	I0913 19:41:58.219524   54234 client.go:168] LocalClient.Create starting
	I0913 19:41:58.219552   54234 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 19:41:58.219586   54234 main.go:141] libmachine: Decoding PEM data...
	I0913 19:41:58.219613   54234 main.go:141] libmachine: Parsing certificate...
	I0913 19:41:58.219675   54234 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 19:41:58.219702   54234 main.go:141] libmachine: Decoding PEM data...
	I0913 19:41:58.219717   54234 main.go:141] libmachine: Parsing certificate...
	I0913 19:41:58.219745   54234 main.go:141] libmachine: Running pre-create checks...
	I0913 19:41:58.219774   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .PreCreateCheck
	I0913 19:41:58.220176   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetConfigRaw
	I0913 19:41:58.220552   54234 main.go:141] libmachine: Creating machine...
	I0913 19:41:58.220567   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .Create
	I0913 19:41:58.220674   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Creating KVM machine...
	I0913 19:41:58.221836   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found existing default KVM network
	I0913 19:41:58.223343   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:41:58.223192   54436 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000300060}
	I0913 19:41:58.223402   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | created network xml: 
	I0913 19:41:58.223427   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | <network>
	I0913 19:41:58.223440   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   <name>mk-kubernetes-upgrade-421098</name>
	I0913 19:41:58.223449   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   <dns enable='no'/>
	I0913 19:41:58.223459   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   
	I0913 19:41:58.223472   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 19:41:58.223480   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |     <dhcp>
	I0913 19:41:58.223495   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 19:41:58.223508   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |     </dhcp>
	I0913 19:41:58.223518   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   </ip>
	I0913 19:41:58.223528   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG |   
	I0913 19:41:58.223539   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | </network>
	I0913 19:41:58.223568   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | 
	I0913 19:41:58.229134   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | trying to create private KVM network mk-kubernetes-upgrade-421098 192.168.39.0/24...
	I0913 19:41:58.297087   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | private KVM network mk-kubernetes-upgrade-421098 192.168.39.0/24 created
	I0913 19:41:58.297135   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:41:58.297064   54436 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:41:58.297155   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098 ...
	I0913 19:41:58.297173   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 19:41:58.297260   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 19:41:58.537949   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:41:58.537853   54436 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa...
	I0913 19:41:58.798543   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:41:58.798381   54436 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/kubernetes-upgrade-421098.rawdisk...
	I0913 19:41:58.798570   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Writing magic tar header
	I0913 19:41:58.798614   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Writing SSH key tar header
	I0913 19:41:58.798650   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:41:58.798504   54436 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098 ...
	I0913 19:41:58.798682   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098 (perms=drwx------)
	I0913 19:41:58.798706   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098
	I0913 19:41:58.798721   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 19:41:58.798739   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 19:41:58.798752   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 19:41:58.798765   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 19:41:58.798776   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 19:41:58.798789   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Creating domain...
	I0913 19:41:58.798804   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 19:41:58.798822   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:41:58.798836   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 19:41:58.798848   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 19:41:58.798859   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home/jenkins
	I0913 19:41:58.798871   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Checking permissions on dir: /home
	I0913 19:41:58.798882   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Skipping /home - not owner
	I0913 19:41:58.800038   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) define libvirt domain using xml: 
	I0913 19:41:58.800057   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) <domain type='kvm'>
	I0913 19:41:58.800065   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <name>kubernetes-upgrade-421098</name>
	I0913 19:41:58.800071   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <memory unit='MiB'>2200</memory>
	I0913 19:41:58.800080   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <vcpu>2</vcpu>
	I0913 19:41:58.800093   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <features>
	I0913 19:41:58.800101   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <acpi/>
	I0913 19:41:58.800116   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <apic/>
	I0913 19:41:58.800125   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <pae/>
	I0913 19:41:58.800140   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     
	I0913 19:41:58.800153   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   </features>
	I0913 19:41:58.800160   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <cpu mode='host-passthrough'>
	I0913 19:41:58.800168   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   
	I0913 19:41:58.800178   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   </cpu>
	I0913 19:41:58.800186   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <os>
	I0913 19:41:58.800197   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <type>hvm</type>
	I0913 19:41:58.800206   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <boot dev='cdrom'/>
	I0913 19:41:58.800218   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <boot dev='hd'/>
	I0913 19:41:58.800251   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <bootmenu enable='no'/>
	I0913 19:41:58.800272   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   </os>
	I0913 19:41:58.800283   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   <devices>
	I0913 19:41:58.800294   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <disk type='file' device='cdrom'>
	I0913 19:41:58.800309   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/boot2docker.iso'/>
	I0913 19:41:58.800320   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <target dev='hdc' bus='scsi'/>
	I0913 19:41:58.800331   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <readonly/>
	I0913 19:41:58.800340   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </disk>
	I0913 19:41:58.800352   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <disk type='file' device='disk'>
	I0913 19:41:58.800367   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 19:41:58.800381   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/kubernetes-upgrade-421098.rawdisk'/>
	I0913 19:41:58.800391   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <target dev='hda' bus='virtio'/>
	I0913 19:41:58.800399   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </disk>
	I0913 19:41:58.800405   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <interface type='network'>
	I0913 19:41:58.800417   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <source network='mk-kubernetes-upgrade-421098'/>
	I0913 19:41:58.800426   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <model type='virtio'/>
	I0913 19:41:58.800447   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </interface>
	I0913 19:41:58.800461   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <interface type='network'>
	I0913 19:41:58.800470   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <source network='default'/>
	I0913 19:41:58.800480   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <model type='virtio'/>
	I0913 19:41:58.800491   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </interface>
	I0913 19:41:58.800501   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <serial type='pty'>
	I0913 19:41:58.800512   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <target port='0'/>
	I0913 19:41:58.800522   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </serial>
	I0913 19:41:58.800533   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <console type='pty'>
	I0913 19:41:58.800541   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <target type='serial' port='0'/>
	I0913 19:41:58.800553   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </console>
	I0913 19:41:58.800563   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     <rng model='virtio'>
	I0913 19:41:58.800595   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)       <backend model='random'>/dev/random</backend>
	I0913 19:41:58.800631   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     </rng>
	I0913 19:41:58.800645   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     
	I0913 19:41:58.800655   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)     
	I0913 19:41:58.800666   54234 main.go:141] libmachine: (kubernetes-upgrade-421098)   </devices>
	I0913 19:41:58.800676   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) </domain>
	I0913 19:41:58.800687   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) 
	I0913 19:41:58.805065   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:28:4c:ea in network default
	I0913 19:41:58.805708   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Ensuring networks are active...
	I0913 19:41:58.805723   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:41:58.806415   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Ensuring network default is active
	I0913 19:41:58.806746   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Ensuring network mk-kubernetes-upgrade-421098 is active
	I0913 19:41:58.807290   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Getting domain xml...
	I0913 19:41:58.807974   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Creating domain...
	I0913 19:42:00.198237   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Waiting to get IP...
	I0913 19:42:00.199165   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.199685   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.199752   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:00.199686   54436 retry.go:31] will retry after 201.978172ms: waiting for machine to come up
	I0913 19:42:00.403299   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.403870   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.403897   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:00.403845   54436 retry.go:31] will retry after 362.578156ms: waiting for machine to come up
	I0913 19:42:00.768526   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.769130   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:00.769161   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:00.769085   54436 retry.go:31] will retry after 436.512505ms: waiting for machine to come up
	I0913 19:42:01.207851   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:01.208435   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:01.208490   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:01.208385   54436 retry.go:31] will retry after 421.76051ms: waiting for machine to come up
	I0913 19:42:01.632213   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:01.632835   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:01.632877   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:01.632797   54436 retry.go:31] will retry after 682.124119ms: waiting for machine to come up
	I0913 19:42:02.316730   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:02.317273   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:02.317302   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:02.317214   54436 retry.go:31] will retry after 652.626532ms: waiting for machine to come up
	I0913 19:42:02.971216   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:02.971733   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:02.971756   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:02.971664   54436 retry.go:31] will retry after 760.776762ms: waiting for machine to come up
	I0913 19:42:03.734205   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:03.734735   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:03.734755   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:03.734677   54436 retry.go:31] will retry after 963.237769ms: waiting for machine to come up
	I0913 19:42:04.699420   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:04.700022   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:04.700049   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:04.699967   54436 retry.go:31] will retry after 1.468349248s: waiting for machine to come up
	I0913 19:42:06.169743   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:06.170249   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:06.170272   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:06.170197   54436 retry.go:31] will retry after 1.635155102s: waiting for machine to come up
	I0913 19:42:07.807486   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:07.807969   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:07.807996   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:07.807913   54436 retry.go:31] will retry after 1.98638089s: waiting for machine to come up
	I0913 19:42:09.795881   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:09.796534   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:09.796570   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:09.796413   54436 retry.go:31] will retry after 2.212758827s: waiting for machine to come up
	I0913 19:42:12.011043   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:12.011582   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:12.011598   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:12.011560   54436 retry.go:31] will retry after 4.200066256s: waiting for machine to come up
	I0913 19:42:16.213486   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:16.214958   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find current IP address of domain kubernetes-upgrade-421098 in network mk-kubernetes-upgrade-421098
	I0913 19:42:16.214989   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | I0913 19:42:16.214898   54436 retry.go:31] will retry after 4.366205907s: waiting for machine to come up
	I0913 19:42:20.584036   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.584659   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has current primary IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.584680   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Found IP for machine: 192.168.39.115
	I0913 19:42:20.584718   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Reserving static IP address...
	I0913 19:42:20.585023   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-421098", mac: "52:54:00:f9:1c:62", ip: "192.168.39.115"} in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.662787   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Getting to WaitForSSH function...
	I0913 19:42:20.662855   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Reserved static IP address: 192.168.39.115
	I0913 19:42:20.662873   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Waiting for SSH to be available...
	I0913 19:42:20.666053   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.666622   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:20.666669   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.666793   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Using SSH client type: external
	I0913 19:42:20.666823   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa (-rw-------)
	I0913 19:42:20.666857   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:42:20.666868   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | About to run SSH command:
	I0913 19:42:20.666902   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | exit 0
	I0913 19:42:20.790606   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | SSH cmd err, output: <nil>: 
	I0913 19:42:20.790916   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) KVM machine creation complete!
	I0913 19:42:20.791186   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetConfigRaw
	I0913 19:42:20.791740   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:20.791938   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:20.792084   54234 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 19:42:20.792097   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetState
	I0913 19:42:20.793462   54234 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 19:42:20.793476   54234 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 19:42:20.793482   54234 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 19:42:20.793489   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:20.796265   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.796603   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:20.796623   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.796806   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:20.796974   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:20.797157   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:20.797294   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:20.797477   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:20.797679   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:20.797695   54234 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 19:42:20.901234   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:42:20.901260   54234 main.go:141] libmachine: Detecting the provisioner...
	I0913 19:42:20.901271   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:20.904226   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.904681   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:20.904715   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:20.904915   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:20.905124   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:20.905282   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:20.905463   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:20.905616   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:20.905784   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:20.905795   54234 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 19:42:21.011940   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 19:42:21.012008   54234 main.go:141] libmachine: found compatible host: buildroot
	I0913 19:42:21.012017   54234 main.go:141] libmachine: Provisioning with buildroot...
	I0913 19:42:21.012025   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetMachineName
	I0913 19:42:21.012274   54234 buildroot.go:166] provisioning hostname "kubernetes-upgrade-421098"
	I0913 19:42:21.012299   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetMachineName
	I0913 19:42:21.012471   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.015289   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.015630   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.015666   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.015770   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.015938   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.016084   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.016221   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.016366   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:21.016587   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:21.016604   54234 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-421098 && echo "kubernetes-upgrade-421098" | sudo tee /etc/hostname
	I0913 19:42:21.129863   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-421098
	
	I0913 19:42:21.129890   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.132554   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.132849   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.132881   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.133106   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.133292   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.133454   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.133582   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.133743   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:21.134000   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:21.134025   54234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-421098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-421098/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-421098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:42:21.244368   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
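Editor's note: the script above is what minikube pushes over SSH to keep /etc/hosts consistent with the freshly provisioned hostname: if no entry already ends in the hostname, it either rewrites an existing 127.0.1.1 line in place or appends one. A minimal standalone sketch of the same guard (the hostname value is taken from this log; any name works):

    HOSTNAME=kubernetes-upgrade-421098
    if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # rewrite the existing loopback alias in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
      else
        # no alias yet: append one
        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi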
	I0913 19:42:21.244398   54234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:42:21.244458   54234 buildroot.go:174] setting up certificates
	I0913 19:42:21.244482   54234 provision.go:84] configureAuth start
	I0913 19:42:21.244502   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetMachineName
	I0913 19:42:21.244817   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetIP
	I0913 19:42:21.247822   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.248205   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.248229   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.248371   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.251031   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.251390   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.251421   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.251595   54234 provision.go:143] copyHostCerts
	I0913 19:42:21.251650   54234 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:42:21.251663   54234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:42:21.251719   54234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:42:21.251814   54234 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:42:21.251823   54234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:42:21.251848   54234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:42:21.251912   54234 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:42:21.251923   54234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:42:21.251939   54234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:42:21.251998   54234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-421098 san=[127.0.0.1 192.168.39.115 kubernetes-upgrade-421098 localhost minikube]
	I0913 19:42:21.338761   54234 provision.go:177] copyRemoteCerts
	I0913 19:42:21.338817   54234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:42:21.338841   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.341506   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.341883   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.341914   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.342148   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.342298   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.342443   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.342531   54234 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa Username:docker}
	I0913 19:42:21.424955   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:42:21.451210   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0913 19:42:21.475266   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:42:21.500641   54234 provision.go:87] duration metric: took 256.144949ms to configureAuth
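Editor's note: configureAuth above generates a server certificate for the guest, signed by the minikube CA and carrying the SANs listed in the log (127.0.0.1, 192.168.39.115, the hostname, localhost, minikube), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; the openssl commands below are only an illustrative equivalent of the same certificate shape (file names and validity period are assumptions, not taken from the log):

    # CSR for the machine, org name matching the log's "org=" field
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.kubernetes-upgrade-421098"
    # sign with the existing CA and attach the SANs shown in the log
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.115,DNS:kubernetes-upgrade-421098,DNS:localhost,DNS:minikube')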
	I0913 19:42:21.500671   54234 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:42:21.500859   54234 config.go:182] Loaded profile config "kubernetes-upgrade-421098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:42:21.500966   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.503904   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.504273   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.504317   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.504538   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.504695   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.504881   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.505025   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.505190   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:21.505368   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:21.505392   54234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:42:21.736105   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
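Editor's note: the SSH command above writes a sysconfig fragment that the ISO's crio.service picks up as extra daemon flags, marking the cluster service CIDR 10.96.0.0/12 as an insecure registry, and then restarts CRI-O so the flag takes effect. Reduced to its essentials on the guest:

    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio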
	
	I0913 19:42:21.736131   54234 main.go:141] libmachine: Checking connection to Docker...
	I0913 19:42:21.736140   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetURL
	I0913 19:42:21.737457   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | Using libvirt version 6000000
	I0913 19:42:21.739687   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.739999   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.740038   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.740175   54234 main.go:141] libmachine: Docker is up and running!
	I0913 19:42:21.740193   54234 main.go:141] libmachine: Reticulating splines...
	I0913 19:42:21.740201   54234 client.go:171] duration metric: took 23.520670634s to LocalClient.Create
	I0913 19:42:21.740222   54234 start.go:167] duration metric: took 23.520727821s to libmachine.API.Create "kubernetes-upgrade-421098"
	I0913 19:42:21.740234   54234 start.go:293] postStartSetup for "kubernetes-upgrade-421098" (driver="kvm2")
	I0913 19:42:21.740248   54234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:42:21.740268   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:21.740498   54234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:42:21.740522   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.742486   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.742821   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.742849   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.742934   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.743120   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.743274   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.743420   54234 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa Username:docker}
	I0913 19:42:21.825330   54234 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:42:21.829541   54234 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:42:21.829572   54234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:42:21.829646   54234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:42:21.829728   54234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:42:21.829833   54234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:42:21.839851   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:42:21.864748   54234 start.go:296] duration metric: took 124.501976ms for postStartSetup
	I0913 19:42:21.864796   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetConfigRaw
	I0913 19:42:21.865445   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetIP
	I0913 19:42:21.868678   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.869034   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.869072   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.869404   54234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/config.json ...
	I0913 19:42:21.869653   54234 start.go:128] duration metric: took 23.670518106s to createHost
	I0913 19:42:21.869681   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.872026   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.872366   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.872398   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.872515   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.872725   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.872862   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.873002   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.873240   54234 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:21.873461   54234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0913 19:42:21.873492   54234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:42:21.974914   54234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726256541.954831039
	
	I0913 19:42:21.974934   54234 fix.go:216] guest clock: 1726256541.954831039
	I0913 19:42:21.974941   54234 fix.go:229] Guest: 2024-09-13 19:42:21.954831039 +0000 UTC Remote: 2024-09-13 19:42:21.869666893 +0000 UTC m=+46.333277999 (delta=85.164146ms)
	I0913 19:42:21.974960   54234 fix.go:200] guest clock delta is within tolerance: 85.164146ms
	I0913 19:42:21.974964   54234 start.go:83] releasing machines lock for "kubernetes-upgrade-421098", held for 23.77597612s
	I0913 19:42:21.974988   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:21.975247   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetIP
	I0913 19:42:21.978143   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.978552   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.978577   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.978799   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:21.979294   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:21.979486   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .DriverName
	I0913 19:42:21.979608   54234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:42:21.979677   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.979733   54234 ssh_runner.go:195] Run: cat /version.json
	I0913 19:42:21.979783   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHHostname
	I0913 19:42:21.982763   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.982931   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.983157   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.983182   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.983213   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:21.983266   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:21.983301   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.983483   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.983503   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHPort
	I0913 19:42:21.983618   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.983695   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHKeyPath
	I0913 19:42:21.983784   54234 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa Username:docker}
	I0913 19:42:21.983824   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetSSHUsername
	I0913 19:42:21.983959   54234 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/kubernetes-upgrade-421098/id_rsa Username:docker}
	I0913 19:42:22.059161   54234 ssh_runner.go:195] Run: systemctl --version
	I0913 19:42:22.085713   54234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:42:22.249353   54234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:42:22.256538   54234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:42:22.256603   54234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:42:22.274292   54234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
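Editor's note: before selecting a CNI, minikube renames any pre-existing bridge/podman CNI definitions in /etc/cni/net.d so they cannot conflict with the bridge config it generates later; the line above shows 87-podman-bridge.conflist being disabled this way. With the shell escaping restored, the find invocation is essentially:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;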
	I0913 19:42:22.274319   54234 start.go:495] detecting cgroup driver to use...
	I0913 19:42:22.274399   54234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:42:22.291217   54234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:42:22.306675   54234 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:42:22.306735   54234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:42:22.321071   54234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:42:22.335272   54234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:42:22.464970   54234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:42:22.640010   54234 docker.go:233] disabling docker service ...
	I0913 19:42:22.640083   54234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:42:22.656320   54234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:42:22.673981   54234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:42:22.804215   54234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:42:22.929101   54234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:42:22.944842   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:42:22.965023   54234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:42:22.965092   54234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:22.975847   54234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:42:22.975917   54234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:22.986912   54234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:22.997242   54234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:23.009739   54234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:42:23.020341   54234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:42:23.030123   54234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:42:23.030196   54234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:42:23.044788   54234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:42:23.054201   54234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:42:23.175880   54234 ssh_runner.go:195] Run: sudo systemctl restart crio
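Editor's note: the steps from 19:42:22.944 to 19:42:23.175 above amount to pointing crictl at the CRI-O socket, editing /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.2, cgroupfs as the cgroup manager, conmon in the pod cgroup), loading br_netfilter because the bridge-nf sysctl was missing, enabling IPv4 forwarding, and restarting CRI-O. Condensed into the equivalent guest-side commands:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter        # fallback taken in the log because the sysctl was absent
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio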
	I0913 19:42:23.274804   54234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:42:23.274878   54234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:42:23.279905   54234 start.go:563] Will wait 60s for crictl version
	I0913 19:42:23.279965   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:23.283817   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:42:23.325939   54234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:42:23.326028   54234 ssh_runner.go:195] Run: crio --version
	I0913 19:42:23.355079   54234 ssh_runner.go:195] Run: crio --version
	I0913 19:42:23.384928   54234 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:42:23.386221   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetIP
	I0913 19:42:23.389156   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:23.389529   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:42:13 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:42:23.389560   54234 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:42:23.389752   54234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:42:23.393873   54234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:42:23.406340   54234 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:42:23.406453   54234 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:42:23.406496   54234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:42:23.438165   54234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:42:23.438240   54234 ssh_runner.go:195] Run: which lz4
	I0913 19:42:23.442130   54234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:42:23.446134   54234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:42:23.446166   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:42:25.058186   54234 crio.go:462] duration metric: took 1.616096237s to copy over tarball
	I0913 19:42:25.058268   54234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:42:27.525712   54234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.467417493s)
	I0913 19:42:27.525737   54234 crio.go:469] duration metric: took 2.46752025s to extract the tarball
	I0913 19:42:27.525743   54234 ssh_runner.go:146] rm: /preloaded.tar.lz4
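Editor's note: because "crictl images" reported none of the expected v1.20.0 images, the preload path kicks in: the ~473 MB lz4 tarball cached on the host is copied to /preloaded.tar.lz4 on the guest and unpacked straight into /var, populating CRI-O's image and overlay stores without any registry pulls. On the guest that is just:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4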
	I0913 19:42:27.568068   54234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:42:27.612240   54234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:42:27.612263   54234 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:42:27.612336   54234 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:42:27.612400   54234 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:27.612414   54234 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:42:27.612336   54234 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:27.612441   54234 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:27.612443   54234 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:42:27.612467   54234 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:27.612472   54234 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:27.613763   54234 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:27.613766   54234 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:27.613797   54234 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:42:27.613816   54234 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:42:27.613818   54234 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:42:27.614088   54234 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:27.614206   54234 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:27.614215   54234 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:27.860404   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:27.877025   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:27.877085   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:42:27.922814   54234 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:42:27.922874   54234 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:27.922919   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:27.931135   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:27.945058   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:27.950427   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:27.974074   54234 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:42:27.974163   54234 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:42:27.974206   54234 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:27.974220   54234 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:42:27.974241   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:27.974276   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:27.974279   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:28.015333   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:42:28.025880   54234 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:42:28.025927   54234 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:28.025977   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:28.086661   54234 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:42:28.086707   54234 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:28.086767   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:28.086771   54234 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:42:28.086801   54234 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:28.086850   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:28.093167   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:42:28.093211   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:28.093240   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:28.094853   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:28.094966   54234 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:42:28.095022   54234 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:42:28.095079   54234 ssh_runner.go:195] Run: which crictl
	I0913 19:42:28.103646   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:28.103660   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:28.233490   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:42:28.233538   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:42:28.233490   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:28.233603   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:42:28.233626   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:28.233765   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:28.233775   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:28.398663   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:42:28.398669   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:42:28.398789   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:42:28.398836   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:42:28.398883   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:42:28.398908   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:42:28.398970   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:42:28.500827   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:42:28.521267   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:42:28.521359   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:42:28.521358   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:42:28.524706   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:42:28.525101   54234 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:42:28.575793   54234 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:42:28.791666   54234 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:42:28.947946   54234 cache_images.go:92] duration metric: took 1.335665733s to LoadCachedImages
	W0913 19:42:28.948051   54234 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0913 19:42:28.948084   54234 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.20.0 crio true true} ...
	I0913 19:42:28.948236   54234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-421098 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:42:28.948309   54234 ssh_runner.go:195] Run: crio config
	I0913 19:42:29.016338   54234 cni.go:84] Creating CNI manager for ""
	I0913 19:42:29.016361   54234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:42:29.016372   54234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:42:29.016394   54234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-421098 NodeName:kubernetes-upgrade-421098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:42:29.016548   54234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-421098"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
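Editor's note: the generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is what later lands on the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2126-byte scp a few lines below). The next step, not shown in this excerpt, is a kubeadm init run against that file; a hedged sketch of that invocation (the exact flags are an assumption, not taken from this log):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new \
      --ignore-preflight-errors=all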
	
	I0913 19:42:29.016621   54234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:42:29.027482   54234 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:42:29.027551   54234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:42:29.041355   54234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0913 19:42:29.065739   54234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:42:29.084735   54234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0913 19:42:29.104834   54234 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0913 19:42:29.109388   54234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:42:29.122813   54234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:42:29.243471   54234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:42:29.263245   54234 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098 for IP: 192.168.39.115
	I0913 19:42:29.263274   54234 certs.go:194] generating shared ca certs ...
	I0913 19:42:29.263295   54234 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.263454   54234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:42:29.263519   54234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:42:29.263531   54234 certs.go:256] generating profile certs ...
	I0913 19:42:29.263586   54234 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.key
	I0913 19:42:29.263619   54234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.crt with IP's: []
	I0913 19:42:29.366240   54234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.crt ...
	I0913 19:42:29.366270   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.crt: {Name:mka0191d94582ebe62f81f31bdf21f594266d55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.366467   54234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.key ...
	I0913 19:42:29.366483   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.key: {Name:mk592ae21629c6ea1dc31ccd5d96c4a3c0e0dbb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.366600   54234 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key.d0cc4592
	I0913 19:42:29.366618   54234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt.d0cc4592 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.115]
	I0913 19:42:29.437200   54234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt.d0cc4592 ...
	I0913 19:42:29.437232   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt.d0cc4592: {Name:mk2bffd046e8f5abc885b1cc9b34bc6b9e4515b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.437425   54234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key.d0cc4592 ...
	I0913 19:42:29.437446   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key.d0cc4592: {Name:mkdce7d4d807bfa0f717612182ef8e959257961b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.437561   54234 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt.d0cc4592 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt
	I0913 19:42:29.437681   54234 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key.d0cc4592 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key
	I0913 19:42:29.437742   54234 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key
	I0913 19:42:29.437757   54234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.crt with IP's: []
	I0913 19:42:29.553127   54234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.crt ...
	I0913 19:42:29.553156   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.crt: {Name:mk717aecb6c81d19d13de21da93a36b92c856e89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.553331   54234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key ...
	I0913 19:42:29.553349   54234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key: {Name:mk6790a6ff93a6fe3a68f2df53ce5d9e7dddd152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:29.553559   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:42:29.553634   54234 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:42:29.553649   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:42:29.553677   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:42:29.553714   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:42:29.553750   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:42:29.553815   54234 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:42:29.554470   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:42:29.586087   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:42:29.618999   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:42:29.647178   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:42:29.675784   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 19:42:29.705966   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:42:29.738812   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:42:29.766921   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:42:29.791652   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:42:29.816601   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:42:29.843575   54234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:42:29.870035   54234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:42:29.889996   54234 ssh_runner.go:195] Run: openssl version
	I0913 19:42:29.899007   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:42:29.918687   54234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:29.925450   54234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:29.925511   54234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:29.933245   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:42:29.946947   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:42:29.960393   54234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:42:29.965228   54234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:42:29.965279   54234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:42:29.971284   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:42:29.983586   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:42:29.995401   54234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:42:30.000205   54234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:42:30.000277   54234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:42:30.006275   54234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
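The openssl/ln sequence above follows the standard OpenSSL subject-hash layout for the system trust store: hash the certificate, then symlink it into /etc/ssl/certs under <hash>.0. A minimal sketch of that convention (paths taken from the log above; the variable names are illustrative only):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash, e.g. b5213941 as seen above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # .0 = first certificate with this hash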
	I0913 19:42:30.018433   54234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:42:30.024298   54234 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 19:42:30.024620   54234 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
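For reference, the StartCluster config above corresponds roughly to a start invocation like the one below; the flags are reconstructed from the config fields (Driver, ContainerRuntime, KubernetesVersion, Memory, CPUs) and are shown only as an illustration of the profile, not the test's literal command line:

    minikube start -p kubernetes-upgrade-421098 \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --memory=2200 --cpus=2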
	I0913 19:42:30.024743   54234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:42:30.024812   54234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:42:30.073340   54234 cri.go:89] found id: ""
	I0913 19:42:30.073413   54234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:42:30.087193   54234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:42:30.099831   54234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:42:30.120083   54234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:42:30.120103   54234 kubeadm.go:157] found existing configuration files:
	
	I0913 19:42:30.120202   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:42:30.133355   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:42:30.133417   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:42:30.144974   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:42:30.155064   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:42:30.155128   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:42:30.166919   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:42:30.184914   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:42:30.184984   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:42:30.196190   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:42:30.206965   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:42:30.207034   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
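The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. As a rough shell sketch of the same per-file check (the loop is illustrative, not minikube's literal code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done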
	I0913 19:42:30.218164   54234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 19:42:30.512906   54234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 19:44:29.119646   54234 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 19:44:29.119769   54234 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 19:44:29.121005   54234 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 19:44:29.121145   54234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 19:44:29.121247   54234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 19:44:29.121366   54234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 19:44:29.121486   54234 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 19:44:29.121588   54234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 19:44:29.123305   54234 out.go:235]   - Generating certificates and keys ...
	I0913 19:44:29.123397   54234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 19:44:29.123489   54234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 19:44:29.123585   54234 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 19:44:29.123664   54234 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 19:44:29.123739   54234 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 19:44:29.123805   54234 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 19:44:29.123873   54234 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 19:44:29.124067   54234 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-421098 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0913 19:44:29.124155   54234 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 19:44:29.124310   54234 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-421098 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0913 19:44:29.124418   54234 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 19:44:29.124500   54234 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 19:44:29.124557   54234 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 19:44:29.124625   54234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 19:44:29.124691   54234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 19:44:29.124757   54234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 19:44:29.124853   54234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 19:44:29.124926   54234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 19:44:29.125083   54234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 19:44:29.125206   54234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 19:44:29.125266   54234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 19:44:29.125354   54234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 19:44:29.126809   54234 out.go:235]   - Booting up control plane ...
	I0913 19:44:29.126945   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 19:44:29.127060   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 19:44:29.127144   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 19:44:29.127261   54234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 19:44:29.127458   54234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 19:44:29.127528   54234 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 19:44:29.127611   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:44:29.127841   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:44:29.127924   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:44:29.128151   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:44:29.128243   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:44:29.128438   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:44:29.128529   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:44:29.128720   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:44:29.128821   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:44:29.129045   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:44:29.129055   54234 kubeadm.go:310] 
	I0913 19:44:29.129108   54234 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 19:44:29.129178   54234 kubeadm.go:310] 		timed out waiting for the condition
	I0913 19:44:29.129188   54234 kubeadm.go:310] 
	I0913 19:44:29.129231   54234 kubeadm.go:310] 	This error is likely caused by:
	I0913 19:44:29.129288   54234 kubeadm.go:310] 		- The kubelet is not running
	I0913 19:44:29.129440   54234 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 19:44:29.129457   54234 kubeadm.go:310] 
	I0913 19:44:29.129599   54234 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 19:44:29.129645   54234 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 19:44:29.129705   54234 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 19:44:29.129714   54234 kubeadm.go:310] 
	I0913 19:44:29.129836   54234 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 19:44:29.129949   54234 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 19:44:29.129971   54234 kubeadm.go:310] 
	I0913 19:44:29.130129   54234 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 19:44:29.130245   54234 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 19:44:29.130357   54234 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 19:44:29.130458   54234 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 19:44:29.130526   54234 kubeadm.go:310] 
	W0913 19:44:29.130612   54234 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-421098 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-421098 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
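The repeated kubelet-check failures point at the kubelet's local health endpoint on port 10248. A minimal manual triage on the node, assuming SSH access to the VM, uses the same commands the output itself suggests:

    systemctl status kubelet
    journalctl -xeu kubelet
    curl -sSL http://localhost:10248/healthz
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause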
	
	I0913 19:44:29.130659   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 19:44:29.800969   54234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:44:29.815471   54234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:44:29.830152   54234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:44:29.830179   54234 kubeadm.go:157] found existing configuration files:
	
	I0913 19:44:29.830234   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:44:29.843888   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:44:29.843957   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:44:29.857237   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:44:29.867091   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:44:29.867160   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:44:29.877242   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:44:29.886641   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:44:29.886702   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:44:29.896880   54234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:44:29.906620   54234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:44:29.906683   54234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:44:29.916568   54234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 19:44:29.985600   54234 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 19:44:29.985670   54234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 19:44:30.136634   54234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 19:44:30.137055   54234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 19:44:30.137317   54234 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 19:44:30.335356   54234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 19:44:30.337253   54234 out.go:235]   - Generating certificates and keys ...
	I0913 19:44:30.337362   54234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 19:44:30.337462   54234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 19:44:30.337578   54234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 19:44:30.337674   54234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 19:44:30.337773   54234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 19:44:30.337860   54234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 19:44:30.337945   54234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 19:44:30.338042   54234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 19:44:30.338168   54234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 19:44:30.338280   54234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 19:44:30.338342   54234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 19:44:30.338427   54234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 19:44:30.509248   54234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 19:44:30.652723   54234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 19:44:30.733540   54234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 19:44:31.029758   54234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 19:44:31.044037   54234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 19:44:31.045242   54234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 19:44:31.045338   54234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 19:44:31.207237   54234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 19:44:31.210164   54234 out.go:235]   - Booting up control plane ...
	I0913 19:44:31.210308   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 19:44:31.213753   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 19:44:31.215703   54234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 19:44:31.217389   54234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 19:44:31.222219   54234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 19:45:11.223528   54234 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 19:45:11.223649   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:45:11.223913   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:45:16.224269   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:45:16.224507   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:45:26.224913   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:45:26.225215   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:45:46.225995   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:45:46.226314   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:46:26.226948   54234 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:46:26.227210   54234 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:46:26.227231   54234 kubeadm.go:310] 
	I0913 19:46:26.227266   54234 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 19:46:26.227302   54234 kubeadm.go:310] 		timed out waiting for the condition
	I0913 19:46:26.227307   54234 kubeadm.go:310] 
	I0913 19:46:26.227345   54234 kubeadm.go:310] 	This error is likely caused by:
	I0913 19:46:26.227375   54234 kubeadm.go:310] 		- The kubelet is not running
	I0913 19:46:26.227471   54234 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 19:46:26.227478   54234 kubeadm.go:310] 
	I0913 19:46:26.227605   54234 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 19:46:26.227647   54234 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 19:46:26.227686   54234 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 19:46:26.227693   54234 kubeadm.go:310] 
	I0913 19:46:26.227826   54234 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 19:46:26.227929   54234 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 19:46:26.227936   54234 kubeadm.go:310] 
	I0913 19:46:26.228078   54234 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 19:46:26.228184   54234 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 19:46:26.228279   54234 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 19:46:26.228370   54234 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 19:46:26.228376   54234 kubeadm.go:310] 
	I0913 19:46:26.229269   54234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 19:46:26.229392   54234 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 19:46:26.229488   54234 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 19:46:26.229615   54234 kubeadm.go:394] duration metric: took 3m56.205001794s to StartCluster
	I0913 19:46:26.229673   54234 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:46:26.229742   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:46:26.289721   54234 cri.go:89] found id: ""
	I0913 19:46:26.289747   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.289759   54234 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:46:26.289767   54234 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:46:26.289837   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:46:26.336272   54234 cri.go:89] found id: ""
	I0913 19:46:26.336309   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.336321   54234 logs.go:278] No container was found matching "etcd"
	I0913 19:46:26.336329   54234 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:46:26.336402   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:46:26.376958   54234 cri.go:89] found id: ""
	I0913 19:46:26.376987   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.376998   54234 logs.go:278] No container was found matching "coredns"
	I0913 19:46:26.377005   54234 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:46:26.377061   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:46:26.447044   54234 cri.go:89] found id: ""
	I0913 19:46:26.447077   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.447087   54234 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:46:26.447094   54234 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:46:26.447150   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:46:26.504012   54234 cri.go:89] found id: ""
	I0913 19:46:26.504035   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.504047   54234 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:46:26.504055   54234 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:46:26.504119   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:46:26.566208   54234 cri.go:89] found id: ""
	I0913 19:46:26.566234   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.566266   54234 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:46:26.566273   54234 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:46:26.566337   54234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:46:26.609341   54234 cri.go:89] found id: ""
	I0913 19:46:26.609366   54234 logs.go:276] 0 containers: []
	W0913 19:46:26.609378   54234 logs.go:278] No container was found matching "kindnet"
	I0913 19:46:26.609390   54234 logs.go:123] Gathering logs for kubelet ...
	I0913 19:46:26.609405   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:46:26.678083   54234 logs.go:123] Gathering logs for dmesg ...
	I0913 19:46:26.678130   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:46:26.696384   54234 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:46:26.696418   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:46:26.860746   54234 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:46:26.860772   54234 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:46:26.860787   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:46:26.989933   54234 logs.go:123] Gathering logs for container status ...
	I0913 19:46:26.989965   54234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0913 19:46:27.038248   54234 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 19:46:27.038305   54234 out.go:270] * 
	W0913 19:46:27.038383   54234 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 19:46:27.038404   54234 out.go:270] * 
	W0913 19:46:27.039555   54234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 19:46:27.043484   54234 out.go:201] 
	W0913 19:46:27.045151   54234 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 19:46:27.045252   54234 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 19:46:27.045343   54234 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 19:46:27.046804   54234 out.go:201] 

                                                
                                                
** /stderr **
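The stderr above fails in kubeadm's wait-control-plane phase because the kubelet never answers its health probe on http://localhost:10248/healthz. A minimal troubleshooting sketch, assuming one can still reach the VM (e.g. via 'minikube ssh -p kubernetes-upgrade-421098', which this run never attempts), using only the checks the output itself recommends:

    # hypothetical session inside the kubernetes-upgrade-421098 VM
    systemctl status kubelet --no-pager                        # is the unit running at all?
    curl -sS http://localhost:10248/healthz                    # the same probe kubeadm polls
    sudo journalctl -xeu kubelet --no-pager | tail -n 50       # a cgroup-driver mismatch usually surfaces here
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause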
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
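The suggestion in that stderr is to set the kubelet cgroup driver explicitly. A hedged sketch of the retry for this profile, combining the failed start command above with the flag minikube itself suggests (whether it actually unblocks the v1.20.0 bootstrap on this host is not established by this run):

    out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd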
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-421098
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-421098: (3.074477961s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-421098 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-421098 status --format={{.Host}}: exit status 7 (63.764265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.043621422s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-421098 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.317916ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-421098] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-421098
	    minikube start -p kubernetes-upgrade-421098 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4210982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-421098 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
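The K8S_DOWNGRADE_UNSUPPORTED message above lists three recovery paths; the restart that follows effectively takes the third and stays on v1.31.1. A small sketch for confirming which version the control plane actually serves afterwards, reusing the check the test performs (it assumes minikube has written the 'kubernetes-upgrade-421098' context into the kubeconfig, as a successful start does):

    kubectl --context kubernetes-upgrade-421098 version --output=json
    # serverVersion.gitVersion should report v1.31.1, not v1.20.0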
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-421098 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.163851691s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-13 19:48:47.649389297 +0000 UTC m=+5281.128394515
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-421098 -n kubernetes-upgrade-421098
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-421098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-421098 logs -n 25: (1.729101286s)
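The post-mortem that follows is trimmed to the most recent entries ('logs -n 25'). For attaching logs to a GitHub issue, as the suggestion box earlier in this output asks, the full bundle can be written to a file instead; a sketch with the same binary and profile:

    out/minikube-linux-amd64 -p kubernetes-upgrade-421098 logs --file=logs.txt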
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl status kubelet --all                       |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl cat kubelet                                |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | journalctl -xeu kubelet --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | systemctl status docker --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl cat docker                                 |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /etc/docker/daemon.json                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo docker                        | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | system info                                          |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | systemctl status cri-docker                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl cat cri-docker                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | cri-dockerd --version                                |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | systemctl status containerd                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl cat containerd                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /lib/systemd/system/containerd.service               |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo cat                           | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /etc/containerd/config.toml                          |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | containerd config dump                               |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl status crio --all                          |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo                               | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | systemctl cat crio --no-pager                        |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo find                          | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                   |         |         |                     |                     |
	| ssh     | -p flannel-604714 sudo crio                          | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	|         | config                                               |                   |         |         |                     |                     |
	| delete  | -p flannel-604714                                    | flannel-604714    | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC | 13 Sep 24 19:48 UTC |
	| start   | -p no-preload-239327                                 | no-preload-239327 | jenkins | v1.34.0 | 13 Sep 24 19:48 UTC |                     |
	|         | --memory=2200                                        |                   |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                   |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                   |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                   |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                   |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:48:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:48:25.359290   67470 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:48:25.359544   67470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:48:25.359554   67470 out.go:358] Setting ErrFile to fd 2...
	I0913 19:48:25.359559   67470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:48:25.359773   67470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:48:25.360367   67470 out.go:352] Setting JSON to false
	I0913 19:48:25.361447   67470 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5448,"bootTime":1726251457,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:48:25.361548   67470 start.go:139] virtualization: kvm guest
	I0913 19:48:25.363918   67470 out.go:177] * [no-preload-239327] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:48:25.365449   67470 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:48:25.365493   67470 notify.go:220] Checking for updates...
	I0913 19:48:25.367874   67470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:48:25.369043   67470 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:48:25.370284   67470 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:48:25.371687   67470 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:48:25.372923   67470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:48:25.374806   67470 config.go:182] Loaded profile config "bridge-604714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:48:25.374950   67470 config.go:182] Loaded profile config "kubernetes-upgrade-421098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:48:25.375067   67470 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:48:25.375171   67470 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:48:25.418484   67470 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 19:48:25.420055   67470 start.go:297] selected driver: kvm2
	I0913 19:48:25.420069   67470 start.go:901] validating driver "kvm2" against <nil>
	I0913 19:48:25.420082   67470 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:48:25.421026   67470 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.421131   67470 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:48:25.437484   67470 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:48:25.437531   67470 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 19:48:25.437859   67470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:48:25.437898   67470 cni.go:84] Creating CNI manager for ""
	I0913 19:48:25.437958   67470 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:48:25.437973   67470 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 19:48:25.438035   67470 start.go:340] cluster config:
	{Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:48:25.438182   67470 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.441060   67470 out.go:177] * Starting "no-preload-239327" primary control-plane node in "no-preload-239327" cluster
	I0913 19:48:25.442318   67470 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:48:25.442588   67470 cache.go:107] acquiring lock: {Name:mk6dae5e24511f854655acaaf71c4c31f69d60e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442625   67470 cache.go:107] acquiring lock: {Name:mk0c7dab543d538ca63ce83c05b98e82d6d8d492 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442634   67470 cache.go:107] acquiring lock: {Name:mkfd5ad4e7c67b95dfed478578ee6c1e17bdfbac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442680   67470 cache.go:115] /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 19:48:25.442692   67470 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.364µs
	I0913 19:48:25.442705   67470 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 19:48:25.442720   67470 cache.go:107] acquiring lock: {Name:mkd0cc76762de8c76180d1ec538c10fcf67d8fb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442744   67470 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:48:25.442770   67470 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:48:25.442775   67470 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:48:25.442799   67470 cache.go:107] acquiring lock: {Name:mk7a033918998cc88829db137e7eff68c1b3360b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442822   67470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json: {Name:mka4161cbca81c35f05cfdecf9092f1ad33c475a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:25.442824   67470 cache.go:107] acquiring lock: {Name:mk9ddc7fb8d473335fac7547615be4b79cabe065 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.442879   67470 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:48:25.442926   67470 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:48:25.442974   67470 start.go:360] acquireMachinesLock for no-preload-239327: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:48:25.443004   67470 start.go:364] duration metric: took 16.891µs to acquireMachinesLock for "no-preload-239327"
	I0913 19:48:25.442594   67470 cache.go:107] acquiring lock: {Name:mke1527d1510c3dd98b051128ec7a27668b8c45e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:48:25.443023   67470 start.go:93] Provisioning new machine with config: &{Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:48:25.442787   67470 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:48:25.443103   67470 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 19:48:24.050233   65574 out.go:235]   - Generating certificates and keys ...
	I0913 19:48:24.050351   65574 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 19:48:24.050453   65574 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 19:48:24.116695   65574 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 19:48:24.309498   65574 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 19:48:24.424997   65574 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 19:48:24.734852   65574 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 19:48:25.193333   65574 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 19:48:25.193596   65574 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0913 19:48:25.442499   65574 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 19:48:25.442800   65574 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0913 19:48:25.606357   65574 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 19:48:25.868306   65574 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 19:48:26.019429   65574 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 19:48:26.019612   65574 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 19:48:26.186548   65574 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 19:48:26.618265   65574 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 19:48:26.807149   65574 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 19:48:27.113245   65574 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 19:48:27.134715   65574 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 19:48:27.137282   65574 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 19:48:27.137340   65574 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 19:48:27.299319   65574 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 19:48:24.738964   65843 main.go:141] libmachine: (kubernetes-upgrade-421098) Calling .GetIP
	I0913 19:48:24.742120   65843 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:48:24.742766   65843 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:1c:62", ip: ""} in network mk-kubernetes-upgrade-421098: {Iface:virbr3 ExpiryTime:2024-09-13 20:47:08 +0000 UTC Type:0 Mac:52:54:00:f9:1c:62 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:kubernetes-upgrade-421098 Clientid:01:52:54:00:f9:1c:62}
	I0913 19:48:24.742794   65843 main.go:141] libmachine: (kubernetes-upgrade-421098) DBG | domain kubernetes-upgrade-421098 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:1c:62 in network mk-kubernetes-upgrade-421098
	I0913 19:48:24.743064   65843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:48:24.747607   65843 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:48:24.747724   65843 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:48:24.747779   65843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:48:24.794180   65843 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:48:24.794205   65843 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:48:24.794270   65843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:48:24.833883   65843 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:48:24.833911   65843 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:48:24.833920   65843 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.31.1 crio true true} ...
	I0913 19:48:24.834039   65843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-421098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:48:24.834151   65843 ssh_runner.go:195] Run: crio config
	I0913 19:48:24.886631   65843 cni.go:84] Creating CNI manager for ""
	I0913 19:48:24.886653   65843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:48:24.886661   65843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:48:24.886686   65843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-421098 NodeName:kubernetes-upgrade-421098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:48:24.886849   65843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-421098"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
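	The YAML above is the kubeadm/kubelet/kube-proxy configuration that minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a minimal sketch, assuming shell access to the node, such a rendered file can be inspected and validated before kubeadm consumes it; the exact invocation minikube uses is not shown in this excerpt, and "kubeadm config validate" is a standard kubeadm subcommand rather than something taken from this log:

	    # print the rendered config that was copied to the node (path from the log above)
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # validate it against the config API types shipped with this kubeadm binary (standard subcommand, not from this log)
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new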
	
	I0913 19:48:24.886913   65843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:48:24.911336   65843 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:48:24.911404   65843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:48:24.922738   65843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0913 19:48:24.942703   65843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:48:24.959459   65843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0913 19:48:24.976827   65843 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0913 19:48:24.981498   65843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:48:25.126763   65843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:48:25.153082   65843 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098 for IP: 192.168.39.115
	I0913 19:48:25.153103   65843 certs.go:194] generating shared ca certs ...
	I0913 19:48:25.153119   65843 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:25.153324   65843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:48:25.153390   65843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:48:25.153403   65843 certs.go:256] generating profile certs ...
	I0913 19:48:25.153479   65843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/client.key
	I0913 19:48:25.153520   65843 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key.d0cc4592
	I0913 19:48:25.153556   65843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key
	I0913 19:48:25.153661   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:48:25.153688   65843 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:48:25.153695   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:48:25.153720   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:48:25.153748   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:48:25.153777   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:48:25.153831   65843 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:48:25.154633   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:48:25.189167   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:48:25.219843   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:48:25.244833   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:48:25.271235   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 19:48:25.296405   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:48:25.325024   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:48:25.355626   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kubernetes-upgrade-421098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:48:25.384477   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:48:25.415333   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:48:25.441436   65843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:48:25.471233   65843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:48:25.497186   65843 ssh_runner.go:195] Run: openssl version
	I0913 19:48:25.505186   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:48:25.520160   65843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:48:25.525790   65843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:48:25.525850   65843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:48:25.533922   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:48:25.591763   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:48:25.620342   65843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:25.637445   65843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:25.637506   65843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:25.673373   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:48:25.699441   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:48:25.734318   65843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:48:25.753525   65843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:48:25.753591   65843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:48:25.774051   65843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
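	The three link sequences above install the test certificates into the system trust store using OpenSSL's subject-hash naming convention: each PEM is first linked into /etc/ssl/certs under a readable name, then under the hash printed by "openssl x509 -hash", which is how verification code locates it. A minimal sketch of the same pattern for the minikubeCA certificate, assuming shell access to the node (the paths and the b5213941 hash come from the log itself; the HASH variable is only illustrative):

	    # expose the CA under a readable name in the trust store
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    # compute the subject hash OpenSSL looks up during verification (prints b5213941 here)
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # create the hash-named symlink if it does not already exist
	    sudo test -L /etc/ssl/certs/${HASH}.0 || sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0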
	I0913 19:48:25.834707   65843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:48:25.855262   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:48:25.904316   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:48:25.974883   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:48:26.008464   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:48:26.060581   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:48:26.114586   65843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
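	The run of "openssl x509 ... -checkend 86400" commands above is the certificate-freshness check: each command exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours), and a failing check is what would prompt regeneration of that certificate. A minimal sketch of the check for a single certificate, assuming shell access to the node:

	    # exit status 0 => certificate still valid 24h from now; non-zero => due for renewal
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "etcd server cert valid for >24h" \
	      || echo "etcd server cert expires within 24h"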
	I0913 19:48:26.146548   65843 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-421098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-421098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:48:26.146642   65843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:48:26.146701   65843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:48:26.290131   65843 cri.go:89] found id: "913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31"
	I0913 19:48:26.290155   65843 cri.go:89] found id: "cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1"
	I0913 19:48:26.290161   65843 cri.go:89] found id: "2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c"
	I0913 19:48:26.290166   65843 cri.go:89] found id: "ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7"
	I0913 19:48:26.290174   65843 cri.go:89] found id: "cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c"
	I0913 19:48:26.290179   65843 cri.go:89] found id: "c9a021fe648ffce884abc229967729b8f8b062668322b8ba3f3300671e61ca52"
	I0913 19:48:26.290183   65843 cri.go:89] found id: "f64413f09f91a92df27071c60c4a09dabb5230d1e00162feea6e10bfbdf3840c"
	I0913 19:48:26.290187   65843 cri.go:89] found id: "69b7ca9edf28cd0510fcc6c779f25d6a7118ce3e9c68623bf2deede1761896e8"
	I0913 19:48:26.290193   65843 cri.go:89] found id: "dfc10cbdc0867ce347586649fa6976cc914e697956d3d6847985e112cb0275b8"
	I0913 19:48:26.290200   65843 cri.go:89] found id: "7396a80499d55a8b28deb912603b0b21c640ecdedad686c7e689d54012437b47"
	I0913 19:48:26.290204   65843 cri.go:89] found id: ""
	I0913 19:48:26.290245   65843 ssh_runner.go:195] Run: sudo runc list -f json
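	StartCluster begins by enumerating the existing kube-system containers through the CRI so the restart logic knows what is already on the node; both the crictl filter and the lower-level runc listing appear verbatim above. A minimal sketch of the same enumeration, assuming shell access to the node:

	    # list every kube-system container (running or exited), IDs only, exactly as in the log
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # cross-check against the OCI runtime's own view of the containers
	    sudo runc list -f json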
	
	
	==> CRI-O <==
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.333255622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256928333233714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6de368a-513c-4788-a525-3bc02ec7383c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.333789567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ded88257-b591-4364-8727-586a9c688191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.333844246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ded88257-b591-4364-8727-586a9c688191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.334238710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e814ce462210e19e439a3c9bff12eb8362962d78430cd2b3aa937d694b8a36d,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256925873084670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d21249de2240ad3aeb6977e59f782412415e4dc4939306c31169e1c7ed7a43,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256922029959810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adc3218dfe4ad8007b8ac1846e00373b2818ec113cafddc3ab43121d02d20cb,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256922003310803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3616d9ee7de20abe3c22ae33fa0fb83fea2cca3e07d40d522f6c25e2947310,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256922020408625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6be7eb6ac6fabbb0ef4606855ead712ea0af269def946bd838868392b06e6,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256922010272775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1726256920210748225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d045aac3193951743ba893173504a56c7a8f925bab51922f8e5ed4ce26128cb,PodSandboxId:f9cfc9d667c162e1a9ec71bb1b664e3e95680af35a76078bd0ee223d5fc49ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256914258503960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c21210de7d1ba963f5fed5e3299f560ddd9c3843977780e32ad6238369e0114,PodSandboxId:f6b3c7e3effd8e9ebe19ce10db85d0f95d0dbadb1fd1975084875f4aa3b3eafa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912309149365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4def21a19c30a8a8ed69a4d2a43bb609ee6337de1454ed2ea1174b0c903a1ae1,PodSandboxId:b48292eb516f118b0be23c2f0cd531f5be8961d9fadabbd1849ec1d709f0cef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912261802485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map
[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256906095726456,Labels:map[string]st
ring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256905942918745,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726256905932501951,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256905788065746,Labels:map[string]string{io.kubernetes.container.name: ku
be-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7,PodSandboxId:1291d34dc4fbc25b5437e29c15864b20eb94f30b625002fedb8dce2ce9470578,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882538125947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c,PodSandboxId:eeaff4a578eeb37940c16cfa73885ab9615dbad9e2e82bbeed8e1d7bf1af8675,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882544573045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c,PodSandboxId:e7832f0d3fc2629a6a29568d627f0bb61163e6c850951e9cdfdfe3a76d5ada25,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256882303841935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ded88257-b591-4364-8727-586a9c688191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.380680667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eabd9103-9eb9-4ab8-aa09-81f0ce4eb659 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.380807647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eabd9103-9eb9-4ab8-aa09-81f0ce4eb659 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.382603477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0a67804-852f-45be-b4ce-8d78edde2ee4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.383152208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256928383111374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0a67804-852f-45be-b4ce-8d78edde2ee4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.384127739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03acfada-c7d6-40f9-a6e5-1657e2f46ef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.384241612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03acfada-c7d6-40f9-a6e5-1657e2f46ef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.384889202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e814ce462210e19e439a3c9bff12eb8362962d78430cd2b3aa937d694b8a36d,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256925873084670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d21249de2240ad3aeb6977e59f782412415e4dc4939306c31169e1c7ed7a43,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256922029959810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adc3218dfe4ad8007b8ac1846e00373b2818ec113cafddc3ab43121d02d20cb,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256922003310803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3616d9ee7de20abe3c22ae33fa0fb83fea2cca3e07d40d522f6c25e2947310,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256922020408625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6be7eb6ac6fabbb0ef4606855ead712ea0af269def946bd838868392b06e6,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256922010272775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1726256920210748225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d045aac3193951743ba893173504a56c7a8f925bab51922f8e5ed4ce26128cb,PodSandboxId:f9cfc9d667c162e1a9ec71bb1b664e3e95680af35a76078bd0ee223d5fc49ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256914258503960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c21210de7d1ba963f5fed5e3299f560ddd9c3843977780e32ad6238369e0114,PodSandboxId:f6b3c7e3effd8e9ebe19ce10db85d0f95d0dbadb1fd1975084875f4aa3b3eafa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912309149365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4def21a19c30a8a8ed69a4d2a43bb609ee6337de1454ed2ea1174b0c903a1ae1,PodSandboxId:b48292eb516f118b0be23c2f0cd531f5be8961d9fadabbd1849ec1d709f0cef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912261802485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map
[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256906095726456,Labels:map[string]st
ring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256905942918745,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726256905932501951,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256905788065746,Labels:map[string]string{io.kubernetes.container.name: ku
be-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7,PodSandboxId:1291d34dc4fbc25b5437e29c15864b20eb94f30b625002fedb8dce2ce9470578,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882538125947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c,PodSandboxId:eeaff4a578eeb37940c16cfa73885ab9615dbad9e2e82bbeed8e1d7bf1af8675,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882544573045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c,PodSandboxId:e7832f0d3fc2629a6a29568d627f0bb61163e6c850951e9cdfdfe3a76d5ada25,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256882303841935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03acfada-c7d6-40f9-a6e5-1657e2f46ef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.439810794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05030f22-c0a0-4540-9c4b-eeb01e9b8f43 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.439965753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05030f22-c0a0-4540-9c4b-eeb01e9b8f43 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.452615101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d827596-2e45-451d-8c5d-73bc4fd27b1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.453695374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256928453667739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d827596-2e45-451d-8c5d-73bc4fd27b1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.454587130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d8a4daa-f4a2-4f15-9696-da6be5ec1ba3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.454664548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d8a4daa-f4a2-4f15-9696-da6be5ec1ba3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.454966338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e814ce462210e19e439a3c9bff12eb8362962d78430cd2b3aa937d694b8a36d,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256925873084670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d21249de2240ad3aeb6977e59f782412415e4dc4939306c31169e1c7ed7a43,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256922029959810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adc3218dfe4ad8007b8ac1846e00373b2818ec113cafddc3ab43121d02d20cb,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256922003310803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3616d9ee7de20abe3c22ae33fa0fb83fea2cca3e07d40d522f6c25e2947310,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256922020408625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6be7eb6ac6fabbb0ef4606855ead712ea0af269def946bd838868392b06e6,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256922010272775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1726256920210748225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d045aac3193951743ba893173504a56c7a8f925bab51922f8e5ed4ce26128cb,PodSandboxId:f9cfc9d667c162e1a9ec71bb1b664e3e95680af35a76078bd0ee223d5fc49ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256914258503960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c21210de7d1ba963f5fed5e3299f560ddd9c3843977780e32ad6238369e0114,PodSandboxId:f6b3c7e3effd8e9ebe19ce10db85d0f95d0dbadb1fd1975084875f4aa3b3eafa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912309149365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4def21a19c30a8a8ed69a4d2a43bb609ee6337de1454ed2ea1174b0c903a1ae1,PodSandboxId:b48292eb516f118b0be23c2f0cd531f5be8961d9fadabbd1849ec1d709f0cef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912261802485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map
[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256906095726456,Labels:map[string]st
ring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256905942918745,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726256905932501951,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256905788065746,Labels:map[string]string{io.kubernetes.container.name: ku
be-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7,PodSandboxId:1291d34dc4fbc25b5437e29c15864b20eb94f30b625002fedb8dce2ce9470578,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882538125947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c,PodSandboxId:eeaff4a578eeb37940c16cfa73885ab9615dbad9e2e82bbeed8e1d7bf1af8675,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882544573045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c,PodSandboxId:e7832f0d3fc2629a6a29568d627f0bb61163e6c850951e9cdfdfe3a76d5ada25,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256882303841935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d8a4daa-f4a2-4f15-9696-da6be5ec1ba3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.500320652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3056c69d-119f-4663-b61c-447ef598cd0f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.500505053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3056c69d-119f-4663-b61c-447ef598cd0f name=/runtime.v1.RuntimeService/Version
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.502178214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8785ee1d-1ca7-44c4-9bf6-43527f6029ce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.504112228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256928504050764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8785ee1d-1ca7-44c4-9bf6-43527f6029ce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.507982173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e05a64a-b225-4d01-a2fe-bc01421ea5c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.508065321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e05a64a-b225-4d01-a2fe-bc01421ea5c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:48:48 kubernetes-upgrade-421098 crio[2628]: time="2024-09-13 19:48:48.508511679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e814ce462210e19e439a3c9bff12eb8362962d78430cd2b3aa937d694b8a36d,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726256925873084670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d21249de2240ad3aeb6977e59f782412415e4dc4939306c31169e1c7ed7a43,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256922029959810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0adc3218dfe4ad8007b8ac1846e00373b2818ec113cafddc3ab43121d02d20cb,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256922003310803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3616d9ee7de20abe3c22ae33fa0fb83fea2cca3e07d40d522f6c25e2947310,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256922020408625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6be7eb6ac6fabbb0ef4606855ead712ea0af269def946bd838868392b06e6,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256922010272775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5,PodSandboxId:0f2879d05c30d443155059395fa8479721e5493dec572844f22d3ebf4c8d8e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1726256920210748225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c650bcecb5204818f99f5c8a35a63744,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d045aac3193951743ba893173504a56c7a8f925bab51922f8e5ed4ce26128cb,PodSandboxId:f9cfc9d667c162e1a9ec71bb1b664e3e95680af35a76078bd0ee223d5fc49ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256914258503960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c21210de7d1ba963f5fed5e3299f560ddd9c3843977780e32ad6238369e0114,PodSandboxId:f6b3c7e3effd8e9ebe19ce10db85d0f95d0dbadb1fd1975084875f4aa3b3eafa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912309149365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4def21a19c30a8a8ed69a4d2a43bb609ee6337de1454ed2ea1174b0c903a1ae1,PodSandboxId:b48292eb516f118b0be23c2f0cd531f5be8961d9fadabbd1849ec1d709f0cef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256912261802485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map
[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06,PodSandboxId:1f915601546fc3d87977f27f040e3473d0352e3993bc67875f94415d89568e2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256906095726456,Labels:map[string]st
ring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c2d47d3329bf4d7745c100066f578e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f,PodSandboxId:cc3acbf02dedb80edeed4f02b6777c2c207d6afa4cc47e5d64491cb90200cd2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256905942918745,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faced3438a0f0ad8a4c23c9cce71b44f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31,PodSandboxId:357aae8f75c7ddcfb167368264e6c585b20087e813f1ce631becaefb726b57da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726256905932501951,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d50876b-97aa-40c3-82f0-e410ca48b6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1,PodSandboxId:2f7b55ddd5159a7b57b6aa9d9fb3ed9f324e4aa463010ee73c22d361960464c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256905788065746,Labels:map[string]string{io.kubernetes.container.name: ku
be-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-421098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c8bf3b12520cf1f7fc5ef0371370949,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7,PodSandboxId:1291d34dc4fbc25b5437e29c15864b20eb94f30b625002fedb8dce2ce9470578,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882538125947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-m45fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f252cd-1730-49f4-b2cf-21dd8ca579f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c,PodSandboxId:eeaff4a578eeb37940c16cfa73885ab9615dbad9e2e82bbeed8e1d7bf1af8675,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256882544573045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6sdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ec0025-e3ef-442c-8b4f-11d5163691ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c,PodSandboxId:e7832f0d3fc2629a6a29568d627f0bb61163e6c850951e9cdfdfe3a76d5ada25,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256882303841935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pdb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c204116-f08f-43c8-9d67-bf8b775ba70e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e05a64a-b225-4d01-a2fe-bc01421ea5c5 name=/runtime.v1.RuntimeService/ListContainers
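
	The two ListContainers responses above are the raw CRI calls behind a "crictl ps -a"-style listing: an empty ContainerFilter returns every container, running or exited ("No filters were applied, returning full container list"). As a rough sketch only (the CRI-O socket path and client wiring are assumptions, not taken from this run), the same RPC can be issued directly against the runtime with the generated CRI client:

	    // Sketch: list all containers via the CRI RuntimeService, mirroring the
	    // empty ContainerFilter seen in the crio debug log above. The socket path
	    // /var/run/crio/crio.sock is an assumption, not taken from the log.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"log"
	    	"time"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer conn.Close()

	    	client := runtimeapi.NewRuntimeServiceClient(conn)
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()

	    	// An empty filter asks for the full container list, like the log above.
	    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	    		Filter: &runtimeapi.ContainerFilter{},
	    	})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	for _, c := range resp.Containers {
	    		// Print truncated id, name, attempt and state, roughly matching
	    		// the "container status" table rendered later in this report.
	    		fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
	    			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	    	}
	    }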
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e814ce462210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       2                   357aae8f75c7d       storage-provisioner
	54d21249de224       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   6 seconds ago       Running             kube-scheduler            3                   0f2879d05c30d       kube-scheduler-kubernetes-upgrade-421098
	4b3616d9ee7de       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 seconds ago       Running             kube-controller-manager   3                   1f915601546fc       kube-controller-manager-kubernetes-upgrade-421098
	e0b6be7eb6ac6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   6 seconds ago       Running             kube-apiserver            3                   2f7b55ddd5159       kube-apiserver-kubernetes-upgrade-421098
	0adc3218dfe4a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   6 seconds ago       Running             etcd                      2                   cc3acbf02dedb       etcd-kubernetes-upgrade-421098
	0a799936876f9       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Created             kube-scheduler            2                   0f2879d05c30d       kube-scheduler-kubernetes-upgrade-421098
	4d045aac31939       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 seconds ago      Running             kube-proxy                1                   f9cfc9d667c16       kube-proxy-9pdb4
	7c21210de7d1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   1                   f6b3c7e3effd8       coredns-7c65d6cfc9-m45fm
	4def21a19c30a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   1                   b48292eb516f1       coredns-7c65d6cfc9-v6sdc
	15404da0de304       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago      Exited              kube-controller-manager   2                   1f915601546fc       kube-controller-manager-kubernetes-upgrade-421098
	0fb160fccea86       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Exited              etcd                      1                   cc3acbf02dedb       etcd-kubernetes-upgrade-421098
	913f05dae74de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Exited              storage-provisioner       1                   357aae8f75c7d       storage-provisioner
	cfb999393ab5b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Exited              kube-apiserver            2                   2f7b55ddd5159       kube-apiserver-kubernetes-upgrade-421098
	2139076bed3e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   0                   eeaff4a578eeb       coredns-7c65d6cfc9-v6sdc
	ffc5cb09120d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   0                   1291d34dc4fbc       coredns-7c65d6cfc9-m45fm
	cdc073abdc81b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   46 seconds ago      Exited              kube-proxy                0                   e7832f0d3fc26       kube-proxy-9pdb4
	
	
	==> coredns [2139076bed3e4f4ac8eaa4da7d19f2fb836126f849466320faef326e9d4b009c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[690830372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.881) (total time: 12298ms):
	Trace[690830372]: [12.298934149s] [12.298934149s] END
	[INFO] plugin/kubernetes: Trace[6064833]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.879) (total time: 12300ms):
	Trace[6064833]: [12.300488838s] [12.300488838s] END
	[INFO] plugin/kubernetes: Trace[1301473330]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.879) (total time: 12300ms):
	Trace[1301473330]: [12.300726359s] [12.300726359s] END
	
	
	==> coredns [4def21a19c30a8a8ed69a4d2a43bb609ee6337de1454ed2ea1174b0c903a1ae1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7c21210de7d1ba963f5fed5e3299f560ddd9c3843977780e32ad6238369e0114] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ffc5cb09120d0b25eb302baf4f0f28730d341b042486efd0424bc1c30696dcf7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1718274994]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.882) (total time: 12289ms):
	Trace[1718274994]: [12.289282546s] [12.289282546s] END
	[INFO] plugin/kubernetes: Trace[1264730981]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.879) (total time: 12291ms):
	Trace[1264730981]: [12.291720693s] [12.291720693s] END
	[INFO] plugin/kubernetes: Trace[1050573717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 19:48:02.880) (total time: 12291ms):
	Trace[1050573717]: [12.291605549s] [12.291605549s] END
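
	The "plugin/ready: Still waiting on: kubernetes" lines above come from CoreDNS's ready plugin, which only reports ready once the kubernetes plugin has synced against the API server. Purely as an illustrative sketch (the pod IP and the default ready-plugin port 8181 are assumptions, neither appears in the log), that readiness endpoint can be probed directly:

	    // Sketch: probe the CoreDNS readiness endpoint that the "plugin/ready"
	    // log lines above refer to. Pod IP and port 8181 are assumptions.
	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"net/http"
	    )

	    func main() {
	    	podIP := "10.244.0.2" // hypothetical CoreDNS pod IP
	    	resp, err := http.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer resp.Body.Close()
	    	// Expect 200 once the kubernetes plugin has synced with the API server.
	    	fmt.Println("ready status:", resp.Status)
	    }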
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-421098
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-421098
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:47:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-421098
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:48:45 +0000   Fri, 13 Sep 2024 19:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:48:45 +0000   Fri, 13 Sep 2024 19:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:48:45 +0000   Fri, 13 Sep 2024 19:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:48:45 +0000   Fri, 13 Sep 2024 19:47:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    kubernetes-upgrade-421098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d398bbe5b9254ba28f7e25fb71741672
	  System UUID:                d398bbe5-b925-4ba2-8f7e-25fb71741672
	  Boot ID:                    7a3391ed-027f-476b-8468-60ba10d0b7d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-m45fm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 coredns-7c65d6cfc9-v6sdc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 etcd-kubernetes-upgrade-421098                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-421098             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-421098    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-9pdb4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-kubernetes-upgrade-421098             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasSufficientPID
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           49s                node-controller  Node kubernetes-upgrade-421098 event: Registered Node kubernetes-upgrade-421098 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-421098 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-421098 event: Registered Node kubernetes-upgrade-421098 in Controller
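
	The node conditions and events above are the "kubectl describe node" view of kubernetes-upgrade-421098. As a hedged sketch only (the kubeconfig location and error handling below are assumptions, not taken from this report), the same Ready/MemoryPressure/DiskPressure/PIDPressure conditions can be read programmatically with client-go:

	    // Sketch: read node conditions for kubernetes-upgrade-421098 with client-go.
	    // Assumes a kubeconfig at the default ~/.kube/config location.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"log"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	node, err := cs.CoreV1().Nodes().Get(context.Background(),
	    		"kubernetes-upgrade-421098", metav1.GetOptions{})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	// Print the same condition columns rendered in the describe output above.
	    	for _, c := range node.Status.Conditions {
	    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	    	}
	    }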
	
	
	==> dmesg <==
	[  +6.440205] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.078886] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064460] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.214810] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.126578] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.310625] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.870625] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +0.062000] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.630696] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[ +14.624903] kauditd_printk_skb: 87 callbacks suppressed
	[ +16.726143] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.104397] kauditd_printk_skb: 10 callbacks suppressed
	[Sep13 19:48] kauditd_printk_skb: 12 callbacks suppressed
	[ +20.347295] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +0.088265] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.093961] systemd-fstab-generator[2294]: Ignoring "noauto" option for root device
	[  +0.220748] systemd-fstab-generator[2308]: Ignoring "noauto" option for root device
	[  +0.224958] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.446111] systemd-fstab-generator[2454]: Ignoring "noauto" option for root device
	[  +1.897287] systemd-fstab-generator[2723]: Ignoring "noauto" option for root device
	[  +2.632309] kauditd_printk_skb: 203 callbacks suppressed
	[  +6.548916] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.117169] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +4.013433] kauditd_printk_skb: 46 callbacks suppressed
	[  +1.316887] systemd-fstab-generator[4152]: Ignoring "noauto" option for root device
	
	
	==> etcd [0adc3218dfe4ad8007b8ac1846e00373b2818ec113cafddc3ab43121d02d20cb] <==
	{"level":"info","ts":"2024-09-13T19:48:42.480833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 switched to configuration voters=(14387798828015139236)"}
	{"level":"info","ts":"2024-09-13T19:48:42.485527Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","added-peer-id":"c7abbacde39fb9a4","added-peer-peer-urls":["https://192.168.39.115:2380"]}
	{"level":"info","ts":"2024-09-13T19:48:42.485661Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:48:42.485713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:48:42.481709Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T19:48:42.487597Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c7abbacde39fb9a4","initial-advertise-peer-urls":["https://192.168.39.115:2380"],"listen-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:48:42.487638Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:48:42.481737Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-13T19:48:42.487683Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-13T19:48:43.712020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:43.712180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:43.712230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgPreVoteResp from c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:43.712265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became candidate at term 4"}
	{"level":"info","ts":"2024-09-13T19:48:43.712289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgVoteResp from c7abbacde39fb9a4 at term 4"}
	{"level":"info","ts":"2024-09-13T19:48:43.712316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became leader at term 4"}
	{"level":"info","ts":"2024-09-13T19:48:43.712341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7abbacde39fb9a4 elected leader c7abbacde39fb9a4 at term 4"}
	{"level":"info","ts":"2024-09-13T19:48:43.716992Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c7abbacde39fb9a4","local-member-attributes":"{Name:kubernetes-upgrade-421098 ClientURLs:[https://192.168.39.115:2379]}","request-path":"/0/members/c7abbacde39fb9a4/attributes","cluster-id":"efb3de1b79640a9c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:48:43.717227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:48:43.717424Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:48:43.718520Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:48:43.718586Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:48:43.718766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:48:43.719444Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:48:43.719921Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.115:2379"}
	{"level":"info","ts":"2024-09-13T19:48:43.720788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f] <==
	{"level":"info","ts":"2024-09-13T19:48:28.294835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:48:28.294862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgPreVoteResp from c7abbacde39fb9a4 at term 2"}
	{"level":"info","ts":"2024-09-13T19:48:28.294875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:28.294881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgVoteResp from c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:28.294890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:28.294898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7abbacde39fb9a4 elected leader c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-13T19:48:28.302343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:48:28.303287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:48:28.302300Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c7abbacde39fb9a4","local-member-attributes":"{Name:kubernetes-upgrade-421098 ClientURLs:[https://192.168.39.115:2379]}","request-path":"/0/members/c7abbacde39fb9a4/attributes","cluster-id":"efb3de1b79640a9c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:48:28.303648Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:48:28.304052Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:48:28.304069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:48:28.304760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.115:2379"}
	{"level":"info","ts":"2024-09-13T19:48:28.305011Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:48:28.306430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:48:29.970436Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T19:48:29.970497Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-421098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	{"level":"warn","ts":"2024-09-13T19:48:29.970603Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:48:29.970637Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:48:29.970700Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:48:29.970770Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T19:48:30.016559Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c7abbacde39fb9a4","current-leader-member-id":"c7abbacde39fb9a4"}
	{"level":"info","ts":"2024-09-13T19:48:30.030686Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-13T19:48:30.030862Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-13T19:48:30.030895Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-421098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	
	
	==> kernel <==
	 19:48:49 up 1 min,  0 users,  load average: 2.03, 0.65, 0.23
	Linux kubernetes-upgrade-421098 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1] <==
	W0913 19:48:39.187776       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.265173       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.299749       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.330699       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.334170       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.344700       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.359454       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.378100       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.388760       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.420441       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.555619       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.616861       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.678658       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.704888       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.710456       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.790310       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.818576       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.887939       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.899517       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.910215       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.919932       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.924316       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.932870       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:39.955338       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:48:40.133639       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e0b6be7eb6ac6fabbb0ef4606855ead712ea0af269def946bd838868392b06e6] <==
	I0913 19:48:45.111320       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:48:45.112547       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:48:45.118058       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:48:45.120141       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:48:45.122697       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:48:45.122765       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:48:45.122772       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:48:45.122776       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:48:45.122781       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:48:45.123909       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:48:45.126113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:48:45.128503       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:48:45.139381       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:48:45.154257       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:48:45.167691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:48:45.167870       1 policy_source.go:224] refreshing policies
	I0913 19:48:45.170581       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:48:45.928818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:48:46.369440       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 19:48:46.382289       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 19:48:46.421861       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 19:48:46.545494       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:48:46.551316       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:48:47.456589       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:48:48.504029       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06] <==
	
	
	==> kube-controller-manager [4b3616d9ee7de20abe3c22ae33fa0fb83fea2cca3e07d40d522f6c25e2947310] <==
	I0913 19:48:48.343590       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0913 19:48:48.343595       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0913 19:48:48.343686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-421098"
	I0913 19:48:48.343939       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0913 19:48:48.345930       1 shared_informer.go:320] Caches are synced for PV protection
	I0913 19:48:48.347019       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:48:48.348584       1 shared_informer.go:320] Caches are synced for crt configmap
	I0913 19:48:48.352144       1 shared_informer.go:320] Caches are synced for endpoint
	I0913 19:48:48.374100       1 shared_informer.go:320] Caches are synced for namespace
	I0913 19:48:48.402709       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:48:48.441415       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:48:48.441448       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:48:48.499783       1 shared_informer.go:320] Caches are synced for job
	I0913 19:48:48.503486       1 shared_informer.go:320] Caches are synced for cronjob
	I0913 19:48:48.505598       1 shared_informer.go:320] Caches are synced for deployment
	I0913 19:48:48.512909       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0913 19:48:48.542267       1 shared_informer.go:320] Caches are synced for disruption
	I0913 19:48:48.547494       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0913 19:48:48.549765       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:48:48.574485       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:48:48.757674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="244.697304ms"
	I0913 19:48:48.766217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.615µs"
	I0913 19:48:48.993466       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:48:48.993508       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:48:49.014712       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [4d045aac3193951743ba893173504a56c7a8f925bab51922f8e5ed4ce26128cb] <==
	 >
	E0913 19:48:34.412717       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:48:41.199107       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-421098\": dial tcp 192.168.39.115:8443: connect: connection refused - error from a previous attempt: unexpected EOF"
	E0913 19:48:42.314725       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-421098\": dial tcp 192.168.39.115:8443: connect: connection refused"
	I0913 19:48:45.109656       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0913 19:48:45.109740       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:48:45.165049       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:48:45.165114       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:48:45.165137       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:48:45.167445       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:48:45.167692       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:48:45.167721       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:48:45.169276       1 config.go:199] "Starting service config controller"
	I0913 19:48:45.169328       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:48:45.169413       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:48:45.169418       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:48:45.170079       1 config.go:328] "Starting node config controller"
	I0913 19:48:45.170108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:48:45.271002       1 shared_informer.go:320] Caches are synced for node config
	I0913 19:48:45.271054       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:48:45.271075       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cdc073abdc81b25ffd86b462987abb461b2ed11081a07981865cecd5e4d4033c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:48:02.883042       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:48:02.899215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0913 19:48:02.899528       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:48:02.934098       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:48:02.934160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:48:02.934193       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:48:02.937308       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:48:02.938447       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:48:02.938493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:48:02.942827       1 config.go:328] "Starting node config controller"
	I0913 19:48:02.943544       1 config.go:199] "Starting service config controller"
	I0913 19:48:02.943598       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:48:02.943618       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:48:02.943622       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:48:02.943993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:48:03.044062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:48:03.044301       1 shared_informer.go:320] Caches are synced for node config
	I0913 19:48:03.044480       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5] <==
	
	
	==> kube-scheduler [54d21249de2240ad3aeb6977e59f782412415e4dc4939306c31169e1c7ed7a43] <==
	I0913 19:48:43.016696       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:48:45.027860       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:48:45.027955       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:48:45.027984       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:48:45.028046       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:48:45.064121       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:48:45.066436       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:48:45.073191       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:48:45.073465       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:48:45.075598       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:48:45.073487       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:48:45.176196       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.795117    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c8bf3b12520cf1f7fc5ef0371370949-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-421098\" (UID: \"7c8bf3b12520cf1f7fc5ef0371370949\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.795135    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c0c2d47d3329bf4d7745c100066f578e-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-421098\" (UID: \"c0c2d47d3329bf4d7745c100066f578e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.795231    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0c2d47d3329bf4d7745c100066f578e-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-421098\" (UID: \"c0c2d47d3329bf4d7745c100066f578e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.795276    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c650bcecb5204818f99f5c8a35a63744-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-421098\" (UID: \"c650bcecb5204818f99f5c8a35a63744\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: E0913 19:48:41.800734    3739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-421098?timeout=10s\": dial tcp 192.168.39.115:8443: connect: connection refused" interval="400ms"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.963049    3739 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: E0913 19:48:41.963931    3739 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.115:8443: connect: connection refused" node="kubernetes-upgrade-421098"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.981647    3739 scope.go:117] "RemoveContainer" containerID="0fb160fccea867151681ac0821bd72f9067a2b1d60a47426968b136cc9b6c17f"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.984182    3739 scope.go:117] "RemoveContainer" containerID="cfb999393ab5ba8567f62722307d34fef4919a26f681adf2652a995f7ca5daa1"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.987638    3739 scope.go:117] "RemoveContainer" containerID="15404da0de304994d5fd71bd450f1b07a0978d3c83cdf98857b4273fd39b3e06"
	Sep 13 19:48:41 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:41.990968    3739 scope.go:117] "RemoveContainer" containerID="0a799936876f9e525390fb19d6429ef916aa53d4cfbb6a0c5de8e6c0c00ed8b5"
	Sep 13 19:48:42 kubernetes-upgrade-421098 kubelet[3739]: E0913 19:48:42.201830    3739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-421098?timeout=10s\": dial tcp 192.168.39.115:8443: connect: connection refused" interval="800ms"
	Sep 13 19:48:42 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:42.366054    3739 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-421098"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.194511    3739 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-421098"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.194909    3739 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-421098"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.194945    3739 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.196043    3739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.558528    3739 apiserver.go:52] "Watching apiserver"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.570516    3739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.590669    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c204116-f08f-43c8-9d67-bf8b775ba70e-xtables-lock\") pod \"kube-proxy-9pdb4\" (UID: \"6c204116-f08f-43c8-9d67-bf8b775ba70e\") " pod="kube-system/kube-proxy-9pdb4"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.590814    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c204116-f08f-43c8-9d67-bf8b775ba70e-lib-modules\") pod \"kube-proxy-9pdb4\" (UID: \"6c204116-f08f-43c8-9d67-bf8b775ba70e\") " pod="kube-system/kube-proxy-9pdb4"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.591201    3739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6d50876b-97aa-40c3-82f0-e410ca48b6c1-tmp\") pod \"storage-provisioner\" (UID: \"6d50876b-97aa-40c3-82f0-e410ca48b6c1\") " pod="kube-system/storage-provisioner"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.631748    3739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-421098" podStartSLOduration=16.631719265 podStartE2EDuration="16.631719265s" podCreationTimestamp="2024-09-13 19:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-13 19:48:45.614847976 +0000 UTC m=+4.170979134" watchObservedRunningTime="2024-09-13 19:48:45.631719265 +0000 UTC m=+4.187850414"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: E0913 19:48:45.759879    3739 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-421098\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-421098"
	Sep 13 19:48:45 kubernetes-upgrade-421098 kubelet[3739]: I0913 19:48:45.863238    3739 scope.go:117] "RemoveContainer" containerID="913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31"
	
	
	==> storage-provisioner [4e814ce462210e19e439a3c9bff12eb8362962d78430cd2b3aa937d694b8a36d] <==
	I0913 19:48:45.965751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:48:45.981609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:48:45.981878       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [913f05dae74de22790f3dcba051fcfb22a3a3c16648bfbbec7c33a52da986a31] <==
	I0913 19:48:26.685501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:48:29.812584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:48:29.812699       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:48:29.867177       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:48:29.867433       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-421098_db394904-12c4-491b-a4a8-24a0c1030c54!
	I0913 19:48:29.869708       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9951df6-d04f-4425-a9ca-e69a9e4aa7ea", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-421098_db394904-12c4-491b-a4a8-24a0c1030c54 became leader
	I0913 19:48:29.968709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-421098_db394904-12c4-491b-a4a8-24a0c1030c54!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:48:47.952706   67783 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-421098 -n kubernetes-upgrade-421098
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-421098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-421098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-421098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-421098: (1.151834948s)
--- FAIL: TestKubernetesUpgrade (435.60s)
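
Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line exceeds the scanner's token limit (bufio.MaxScanTokenSize, 64 KiB by default); the flattened one-line cluster-config entries written to lastStart.txt can easily be longer than that. The sketch below is not minikube's logs.go code, only a minimal illustration of the failure mode and of raising the limit with Scanner.Buffer; the readLongLines helper and the 10 MiB cap are assumptions made for this example.

	// Hedged sketch (not minikube's actual logs.go): reading a log file whose
	// individual lines can exceed bufio.Scanner's default 64 KiB token limit.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// With the default buffer, a line longer than bufio.MaxScanTokenSize
		// (64 KiB) makes Scan() stop and Err() report "token too long".
		// Raise the cap before scanning; 10 MiB is an arbitrary choice here.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("/home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "failed to read last start logs:", err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}

Under the same assumption, bufio.Reader.ReadString('\n') would also work, since it has no fixed per-line cap.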

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (107.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-933457 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-933457 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m43.531000812s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-933457] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-933457" primary control-plane node in "pause-933457" cluster
	* Updating the running kvm2 "pause-933457" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-933457" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:41:35.891624   54266 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:41:35.891867   54266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:41:35.891875   54266 out.go:358] Setting ErrFile to fd 2...
	I0913 19:41:35.891880   54266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:41:35.892113   54266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:41:35.892643   54266 out.go:352] Setting JSON to false
	I0913 19:41:35.893507   54266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5039,"bootTime":1726251457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:41:35.893603   54266 start.go:139] virtualization: kvm guest
	I0913 19:41:35.895508   54266 out.go:177] * [pause-933457] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:41:35.897288   54266 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:41:35.897297   54266 notify.go:220] Checking for updates...
	I0913 19:41:35.899811   54266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:41:35.901151   54266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:41:35.902344   54266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:41:35.903793   54266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:41:35.905087   54266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:41:35.906873   54266 config.go:182] Loaded profile config "pause-933457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:41:35.907497   54266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:41:35.907556   54266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:41:35.922338   54266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0913 19:41:35.922766   54266 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:41:35.923296   54266 main.go:141] libmachine: Using API Version  1
	I0913 19:41:35.923317   54266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:41:35.923672   54266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:41:35.923858   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:41:35.924278   54266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:41:35.924580   54266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:41:35.924618   54266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:41:35.939491   54266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I0913 19:41:35.939891   54266 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:41:35.940359   54266 main.go:141] libmachine: Using API Version  1
	I0913 19:41:35.940380   54266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:41:35.940718   54266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:41:35.940921   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:41:35.976057   54266 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:41:35.977491   54266 start.go:297] selected driver: kvm2
	I0913 19:41:35.977506   54266 start.go:901] validating driver "kvm2" against &{Name:pause-933457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-933457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:41:35.977629   54266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:41:35.977922   54266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:41:35.977995   54266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:41:35.993844   54266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:41:35.994591   54266 cni.go:84] Creating CNI manager for ""
	I0913 19:41:35.994656   54266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:41:35.994708   54266 start.go:340] cluster config:
	{Name:pause-933457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-933457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:41:35.994871   54266 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:41:35.996712   54266 out.go:177] * Starting "pause-933457" primary control-plane node in "pause-933457" cluster
	I0913 19:41:35.997852   54266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:41:35.997888   54266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 19:41:35.997901   54266 cache.go:56] Caching tarball of preloaded images
	I0913 19:41:35.998003   54266 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:41:35.998017   54266 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 19:41:35.998264   54266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/config.json ...
	I0913 19:41:35.998507   54266 start.go:360] acquireMachinesLock for pause-933457: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:42:21.975046   54266 start.go:364] duration metric: took 45.976511074s to acquireMachinesLock for "pause-933457"
	I0913 19:42:21.975105   54266 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:42:21.975112   54266 fix.go:54] fixHost starting: 
	I0913 19:42:21.975543   54266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:42:21.975602   54266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:42:21.993046   54266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0913 19:42:21.993725   54266 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:42:21.994373   54266 main.go:141] libmachine: Using API Version  1
	I0913 19:42:21.994403   54266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:42:21.994794   54266 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:42:21.994995   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:21.995136   54266 main.go:141] libmachine: (pause-933457) Calling .GetState
	I0913 19:42:21.996777   54266 fix.go:112] recreateIfNeeded on pause-933457: state=Running err=<nil>
	W0913 19:42:21.996804   54266 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:42:21.998896   54266 out.go:177] * Updating the running kvm2 "pause-933457" VM ...
	I0913 19:42:22.000429   54266 machine.go:93] provisionDockerMachine start ...
	I0913 19:42:22.000455   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:22.000645   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.003457   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.003940   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.003968   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.004116   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:22.004281   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.004439   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.004587   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:22.004761   54266 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:22.004981   54266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.59 22 <nil> <nil>}
	I0913 19:42:22.005001   54266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:42:22.112418   54266 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-933457
	
	I0913 19:42:22.112460   54266 main.go:141] libmachine: (pause-933457) Calling .GetMachineName
	I0913 19:42:22.112688   54266 buildroot.go:166] provisioning hostname "pause-933457"
	I0913 19:42:22.112722   54266 main.go:141] libmachine: (pause-933457) Calling .GetMachineName
	I0913 19:42:22.112905   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.115978   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.116342   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.116369   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.116521   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:22.116679   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.116866   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.117017   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:22.117203   54266 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:22.117373   54266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.59 22 <nil> <nil>}
	I0913 19:42:22.117385   54266 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-933457 && echo "pause-933457" | sudo tee /etc/hostname
	I0913 19:42:22.250022   54266 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-933457
	
	I0913 19:42:22.250046   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.253082   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.253428   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.253463   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.253586   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:22.253799   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.254011   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.254197   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:22.254393   54266 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:22.254585   54266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.59 22 <nil> <nil>}
	I0913 19:42:22.254602   54266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-933457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-933457/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-933457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:42:22.359153   54266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:42:22.359182   54266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:42:22.359203   54266 buildroot.go:174] setting up certificates
	I0913 19:42:22.359216   54266 provision.go:84] configureAuth start
	I0913 19:42:22.359228   54266 main.go:141] libmachine: (pause-933457) Calling .GetMachineName
	I0913 19:42:22.359500   54266 main.go:141] libmachine: (pause-933457) Calling .GetIP
	I0913 19:42:22.362235   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.362591   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.362621   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.362754   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.364913   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.365306   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.365333   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.365538   54266 provision.go:143] copyHostCerts
	I0913 19:42:22.365597   54266 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:42:22.365609   54266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:42:22.365675   54266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:42:22.365805   54266 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:42:22.365816   54266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:42:22.365849   54266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:42:22.365946   54266 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:42:22.365956   54266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:42:22.365984   54266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:42:22.366066   54266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.pause-933457 san=[127.0.0.1 192.168.83.59 localhost minikube pause-933457]
	I0913 19:42:22.529348   54266 provision.go:177] copyRemoteCerts
	I0913 19:42:22.529424   54266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:42:22.529454   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.532314   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.532643   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.532670   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.532821   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:22.533025   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.533233   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:22.533408   54266 sshutil.go:53] new ssh client: &{IP:192.168.83.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/pause-933457/id_rsa Username:docker}
	I0913 19:42:22.617076   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:42:22.648023   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 19:42:22.687124   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:42:22.718282   54266 provision.go:87] duration metric: took 359.051949ms to configureAuth
	I0913 19:42:22.718317   54266 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:42:22.718607   54266 config.go:182] Loaded profile config "pause-933457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:42:22.718721   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:22.721535   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.721923   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:22.721953   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:22.722132   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:22.722346   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.722487   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:22.722658   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:22.722804   54266 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:22.722971   54266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.59 22 <nil> <nil>}
	I0913 19:42:22.722985   54266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:42:28.270604   54266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:42:28.270641   54266 machine.go:96] duration metric: took 6.270193932s to provisionDockerMachine
	I0913 19:42:28.270655   54266 start.go:293] postStartSetup for "pause-933457" (driver="kvm2")
	I0913 19:42:28.270668   54266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:42:28.270689   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:28.271109   54266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:42:28.271143   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:28.274123   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.274568   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:28.274597   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.274769   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:28.274950   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:28.275065   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:28.275216   54266 sshutil.go:53] new ssh client: &{IP:192.168.83.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/pause-933457/id_rsa Username:docker}
	I0913 19:42:28.357609   54266 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:42:28.362109   54266 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:42:28.362137   54266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:42:28.362206   54266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:42:28.362313   54266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:42:28.362454   54266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:42:28.372884   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:42:28.406219   54266 start.go:296] duration metric: took 135.540449ms for postStartSetup
	I0913 19:42:28.406256   54266 fix.go:56] duration metric: took 6.43114364s for fixHost
	I0913 19:42:28.406286   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:28.409213   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.409482   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:28.409516   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.409643   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:28.409824   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:28.409977   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:28.410166   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:28.410330   54266 main.go:141] libmachine: Using SSH client type: native
	I0913 19:42:28.410545   54266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.59 22 <nil> <nil>}
	I0913 19:42:28.410566   54266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:42:28.520055   54266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726256548.509099420
	
	I0913 19:42:28.520076   54266 fix.go:216] guest clock: 1726256548.509099420
	I0913 19:42:28.520083   54266 fix.go:229] Guest: 2024-09-13 19:42:28.50909942 +0000 UTC Remote: 2024-09-13 19:42:28.40626072 +0000 UTC m=+52.550777207 (delta=102.8387ms)
	I0913 19:42:28.520105   54266 fix.go:200] guest clock delta is within tolerance: 102.8387ms
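	(Aside: the "delta" above is simply the guest timestamp minus the host-side timestamp; a minimal check with GNU bc, assumed available on the host, using the two values from this log:

	    echo "1726256548.509099420 - 1726256548.406260720" | bc
	    # .102838700 seconds, i.e. the ~102.84ms delta reported as within tolerance
	)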
	I0913 19:42:28.520111   54266 start.go:83] releasing machines lock for "pause-933457", held for 6.545029173s
	I0913 19:42:28.520144   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:28.520377   54266 main.go:141] libmachine: (pause-933457) Calling .GetIP
	I0913 19:42:28.523511   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.523871   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:28.523891   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.524059   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:28.524634   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:28.524827   54266 main.go:141] libmachine: (pause-933457) Calling .DriverName
	I0913 19:42:28.524940   54266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:42:28.524981   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:28.525065   54266 ssh_runner.go:195] Run: cat /version.json
	I0913 19:42:28.525085   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHHostname
	I0913 19:42:28.528137   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.528188   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.528533   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:28.528601   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.528634   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:28.528657   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:28.528735   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:28.528867   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHPort
	I0913 19:42:28.528941   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:28.529058   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHKeyPath
	I0913 19:42:28.529120   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:28.529200   54266 main.go:141] libmachine: (pause-933457) Calling .GetSSHUsername
	I0913 19:42:28.529385   54266 sshutil.go:53] new ssh client: &{IP:192.168.83.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/pause-933457/id_rsa Username:docker}
	I0913 19:42:28.529391   54266 sshutil.go:53] new ssh client: &{IP:192.168.83.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/pause-933457/id_rsa Username:docker}
	I0913 19:42:28.638217   54266 ssh_runner.go:195] Run: systemctl --version
	I0913 19:42:28.644858   54266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:42:28.800310   54266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:42:28.806841   54266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:42:28.806902   54266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:42:28.819048   54266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
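	The find invocation logged just above appears with its shell quoting stripped; a readable equivalent (same predicates as logged, quoting assumed) is:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;

	It renames any bridge/podman CNI configs with a .mk_disabled suffix; here nothing matched, hence the "nothing to disable" line.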
	I0913 19:42:28.819073   54266 start.go:495] detecting cgroup driver to use...
	I0913 19:42:28.819151   54266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:42:28.836614   54266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:42:28.852059   54266 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:42:28.852117   54266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:42:28.866985   54266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:42:28.880852   54266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:42:29.064397   54266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:42:29.234181   54266 docker.go:233] disabling docker service ...
	I0913 19:42:29.234257   54266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:42:29.253807   54266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:42:29.271209   54266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:42:29.428121   54266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:42:29.589460   54266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:42:29.605148   54266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:42:29.630566   54266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:42:29.630637   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.641541   54266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:42:29.641615   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.652661   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.663609   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.677452   54266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:42:29.689324   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.704107   54266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:42:29.717935   54266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
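	A quick way to confirm what the sed edits above leave behind in the drop-in (expected matches shown as a sketch, assuming they all applied cleanly):

	    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	      "net.ipv4.ip_unprivileged_port_start=0",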
	I0913 19:42:29.730662   54266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:42:29.744765   54266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:42:29.757314   54266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:42:29.925746   54266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:42:32.092370   54266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.166593902s)
	I0913 19:42:32.092400   54266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:42:32.092456   54266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:42:32.098763   54266 start.go:563] Will wait 60s for crictl version
	I0913 19:42:32.098828   54266 ssh_runner.go:195] Run: which crictl
	I0913 19:42:32.103928   54266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:42:32.158198   54266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:42:32.158314   54266 ssh_runner.go:195] Run: crio --version
	I0913 19:42:32.193172   54266 ssh_runner.go:195] Run: crio --version
	I0913 19:42:32.235366   54266 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:42:32.237091   54266 main.go:141] libmachine: (pause-933457) Calling .GetIP
	I0913 19:42:32.240361   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:32.240747   54266 main.go:141] libmachine: (pause-933457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fe:11", ip: ""} in network mk-pause-933457: {Iface:virbr2 ExpiryTime:2024-09-13 20:40:57 +0000 UTC Type:0 Mac:52:54:00:fc:fe:11 Iaid: IPaddr:192.168.83.59 Prefix:24 Hostname:pause-933457 Clientid:01:52:54:00:fc:fe:11}
	I0913 19:42:32.240772   54266 main.go:141] libmachine: (pause-933457) DBG | domain pause-933457 has defined IP address 192.168.83.59 and MAC address 52:54:00:fc:fe:11 in network mk-pause-933457
	I0913 19:42:32.241025   54266 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0913 19:42:32.246307   54266 kubeadm.go:883] updating cluster {Name:pause-933457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-933457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:42:32.246443   54266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:42:32.246502   54266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:42:32.296004   54266 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:42:32.296026   54266 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:42:32.296085   54266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:42:32.333348   54266 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:42:32.333371   54266 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:42:32.333380   54266 kubeadm.go:934] updating node { 192.168.83.59 8443 v1.31.1 crio true true} ...
	I0913 19:42:32.333499   54266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-933457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-933457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:42:32.333566   54266 ssh_runner.go:195] Run: crio config
	I0913 19:42:32.390924   54266 cni.go:84] Creating CNI manager for ""
	I0913 19:42:32.390954   54266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:42:32.390970   54266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:42:32.390999   54266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.59 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-933457 NodeName:pause-933457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:42:32.391213   54266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-933457"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:42:32.391286   54266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:42:32.402698   54266 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:42:32.402792   54266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:42:32.413039   54266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0913 19:42:32.431480   54266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:42:32.450128   54266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 19:42:32.540868   54266 ssh_runner.go:195] Run: grep 192.168.83.59	control-plane.minikube.internal$ /etc/hosts
	I0913 19:42:32.550879   54266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:42:32.860253   54266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:42:32.902942   54266 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457 for IP: 192.168.83.59
	I0913 19:42:32.902969   54266 certs.go:194] generating shared ca certs ...
	I0913 19:42:32.902988   54266 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:42:32.903196   54266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:42:32.903257   54266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:42:32.903268   54266 certs.go:256] generating profile certs ...
	I0913 19:42:32.903408   54266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/client.key
	I0913 19:42:32.903513   54266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/apiserver.key.519d5ab5
	I0913 19:42:32.903574   54266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/proxy-client.key
	I0913 19:42:32.903718   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:42:32.903759   54266 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:42:32.903772   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:42:32.903811   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:42:32.903845   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:42:32.903875   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:42:32.903928   54266 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:42:32.904789   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:42:32.968679   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:42:33.025883   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:42:33.065000   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:42:33.178319   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 19:42:33.262045   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:42:33.339554   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:42:33.419699   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/pause-933457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:42:33.572192   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:42:33.615412   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:42:33.658819   54266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:42:33.759742   54266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:42:33.802248   54266 ssh_runner.go:195] Run: openssl version
	I0913 19:42:33.810656   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:42:33.833884   54266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:42:33.851714   54266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:42:33.851800   54266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:42:33.865622   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:42:33.910857   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:42:33.935451   54266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:33.940257   54266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:33.940327   54266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:42:33.950788   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:42:33.970562   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:42:33.987068   54266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:42:33.993859   54266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:42:33.993933   54266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:42:34.001102   54266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:42:34.015509   54266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:42:34.020239   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:42:34.028110   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:42:34.034497   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:42:34.041930   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:42:34.047926   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:42:34.055234   54266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
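	The hash-named symlinks and -checkend probes above can be reproduced by hand; a minimal sketch using paths from this log (and assuming the /etc/ssl/certs/minikubeCA.pem link created a few steps earlier is in place):

	    # OpenSSL resolves CAs via subject-hash symlinks, e.g. b5213941.0 for minikubeCA.pem above
	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"

	    # -checkend 86400 exits non-zero if the cert expires within 24h (the window checked here)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400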
	I0913 19:42:34.061893   54266 kubeadm.go:392] StartCluster: {Name:pause-933457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-933457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:42:34.062029   54266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:42:34.062126   54266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:42:34.145733   54266 cri.go:89] found id: "ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05"
	I0913 19:42:34.145759   54266 cri.go:89] found id: "4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2"
	I0913 19:42:34.145765   54266 cri.go:89] found id: "8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a"
	I0913 19:42:34.145769   54266 cri.go:89] found id: "61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966"
	I0913 19:42:34.145772   54266 cri.go:89] found id: "6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27"
	I0913 19:42:34.145777   54266 cri.go:89] found id: "c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259"
	I0913 19:42:34.145780   54266 cri.go:89] found id: "e336fd9ccdee0bae97772cfa5d1c90ce0405bef430f47174d0d65e0900359f97"
	I0913 19:42:34.145784   54266 cri.go:89] found id: "3fd7eccb00a42eaa74d7161a5f6416e6fd223c8c216005f740e0fb22783e778e"
	I0913 19:42:34.145789   54266 cri.go:89] found id: "2b138d59a9af3a5ede60e9676e17f5c907be811fb414c1b8f3be341eabd5ad50"
	I0913 19:42:34.145798   54266 cri.go:89] found id: "65b586e2724bd1169660c04138357b13211dd02371da5e0915ecc493d7a8b41e"
	I0913 19:42:34.145802   54266 cri.go:89] found id: "10b2ec33bc0015aea63b50d9f59c21764b14d949be78d8576daf464426fc886f"
	I0913 19:42:34.145806   54266 cri.go:89] found id: "2c678d0a47ec190eacf0f21bffd78831a318149c88307e7df550f6690764b5c2"
	I0913 19:42:34.145812   54266 cri.go:89] found id: ""
	I0913 19:42:34.145864   54266 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-933457 -n pause-933457
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-933457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-933457 logs -n 25: (1.36562721s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-235626             | cert-expiration-235626    | jenkins | v1.34.0 | 13 Sep 24 19:38 UTC | 13 Sep 24 19:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-568412                | offline-crio-568412       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	| start   | -p force-systemd-flag-642942          | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:40 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:40 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-605510             | running-upgrade-605510    | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-642942 ssh cat     | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-642942          | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	| start   | -p pause-933457 --memory=2048         | pause-933457              | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:41 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-590674 sudo           | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:41 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-590674 sudo           | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:41 UTC |
	| start   | -p cert-options-718151                | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:42 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-605510             | running-upgrade-605510    | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:41 UTC |
	| start   | -p kubernetes-upgrade-421098          | kubernetes-upgrade-421098 | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-933457                       | pause-933457              | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-718151 ssh               | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-718151 -- sudo        | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-718151                | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	| start   | -p stopped-upgrade-520539             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:43 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-235626             | cert-expiration-235626    | jenkins | v1.34.0 | 13 Sep 24 19:43 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-520539 stop           | minikube                  | jenkins | v1.26.0 | 13 Sep 24 19:43 UTC |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:43:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:43:02.465471   55271 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:43:02.465620   55271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:43:02.465625   55271 out.go:358] Setting ErrFile to fd 2...
	I0913 19:43:02.465630   55271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:43:02.465877   55271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:43:02.466582   55271 out.go:352] Setting JSON to false
	I0913 19:43:02.467892   55271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5125,"bootTime":1726251457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:43:02.468013   55271 start.go:139] virtualization: kvm guest
	I0913 19:43:02.470760   55271 out.go:177] * [cert-expiration-235626] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:43:02.472261   55271 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:43:02.472263   55271 notify.go:220] Checking for updates...
	I0913 19:43:02.473649   55271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:43:02.474997   55271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:43:02.476353   55271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:43:02.477674   55271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:43:02.479077   55271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:43:02.480942   55271 config.go:182] Loaded profile config "cert-expiration-235626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:43:02.481402   55271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:43:02.481462   55271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:43:02.500649   55271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0913 19:43:02.501445   55271 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:43:02.502149   55271 main.go:141] libmachine: Using API Version  1
	I0913 19:43:02.502163   55271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:43:02.502661   55271 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:43:02.502880   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:02.503138   55271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:43:02.503583   55271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:43:02.503617   55271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:43:02.519783   55271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0913 19:43:02.520155   55271 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:43:02.520744   55271 main.go:141] libmachine: Using API Version  1
	I0913 19:43:02.520756   55271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:43:02.521234   55271 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:43:02.521391   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:02.559201   55271 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:43:02.560694   55271 start.go:297] selected driver: kvm2
	I0913 19:43:02.560704   55271 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-235626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-235626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:43:02.560820   55271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:43:02.561517   55271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:43:02.561589   55271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:43:02.578152   55271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:43:02.578645   55271 cni.go:84] Creating CNI manager for ""
	I0913 19:43:02.578699   55271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:43:02.578767   55271 start.go:340] cluster config:
	{Name:cert-expiration-235626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-235626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:43:02.578901   55271 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:43:02.581636   55271 out.go:177] * Starting "cert-expiration-235626" primary control-plane node in "cert-expiration-235626" cluster
	I0913 19:43:02.582909   55271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:43:02.582951   55271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 19:43:02.582959   55271 cache.go:56] Caching tarball of preloaded images
	I0913 19:43:02.583027   55271 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:43:02.583033   55271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
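	The lines above only check that the preloaded-images tarball already exists on disk before deciding to skip the download. A minimal Go sketch of that kind of cache check, using the tarball path from the log; the hasPreload helper is hypothetical, not minikube's own code:

package main

import (
	"fmt"
	"os"
)

// hasPreload reports whether a preloaded-images tarball is already cached,
// mirroring the "Found local preload ... skipping download" decision above.
// The helper name and logic are illustrative assumptions.
func hasPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Size() > 0
}

func main() {
	tarball := "/home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
	if hasPreload(tarball) {
		fmt.Println("preload found in cache, skipping download")
	} else {
		fmt.Println("preload missing, would download")
	}
}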
	I0913 19:43:02.583116   55271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/config.json ...
	I0913 19:43:02.583313   55271 start.go:360] acquireMachinesLock for cert-expiration-235626: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:43:02.583349   55271 start.go:364] duration metric: took 24.184µs to acquireMachinesLock for "cert-expiration-235626"
	I0913 19:43:02.583359   55271 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:43:02.583363   55271 fix.go:54] fixHost starting: 
	I0913 19:43:02.583609   55271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:43:02.583639   55271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:43:02.600527   55271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0913 19:43:02.601007   55271 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:43:02.601606   55271 main.go:141] libmachine: Using API Version  1
	I0913 19:43:02.601623   55271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:43:02.602016   55271 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:43:02.602217   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:02.602371   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetState
	I0913 19:43:02.604692   55271 fix.go:112] recreateIfNeeded on cert-expiration-235626: state=Running err=<nil>
	W0913 19:43:02.604707   55271 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:43:02.606373   55271 out.go:177] * Updating the running kvm2 "cert-expiration-235626" VM ...
	I0913 19:43:01.739991   54266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:43:01.752407   54266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:43:01.774585   54266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:43:01.774678   54266 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 19:43:01.774707   54266 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 19:43:01.783350   54266 system_pods.go:59] 6 kube-system pods found
	I0913 19:43:01.783391   54266 system_pods.go:61] "coredns-7c65d6cfc9-7fxbj" [b0dc6419-dce7-46cd-8caa-d46406a809a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:43:01.783398   54266 system_pods.go:61] "etcd-pause-933457" [5d84e9ab-6209-4a01-8fcd-369590fe189d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:43:01.783405   54266 system_pods.go:61] "kube-apiserver-pause-933457" [0a0a9fb2-b27b-4665-a48e-ff4bfb87f804] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:43:01.783417   54266 system_pods.go:61] "kube-controller-manager-pause-933457" [3c186d3b-da13-4262-9100-81e9f9d74fb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:43:01.783422   54266 system_pods.go:61] "kube-proxy-frbfp" [cfb8342b-c790-4425-baeb-c40e02d7fad0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:43:01.783427   54266 system_pods.go:61] "kube-scheduler-pause-933457" [f4f3c82c-538a-41c8-9f95-b73f293127ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:43:01.783434   54266 system_pods.go:74] duration metric: took 8.822956ms to wait for pod list to return data ...
	I0913 19:43:01.783441   54266 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:43:01.786955   54266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:43:01.786977   54266 node_conditions.go:123] node cpu capacity is 2
	I0913 19:43:01.786987   54266 node_conditions.go:105] duration metric: took 3.541888ms to run NodePressure ...
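	At this point the runner lists kube-system pods and reads node capacity before moving on to the kubeadm addon phase. A hedged client-go sketch of the same two queries, assuming a reachable cluster via the default kubeconfig; this is illustrative, not minikube's system_pods/node_conditions implementation:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Count kube-system pods, as the "waiting for kube-system pods" step does.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	// Read node CPU capacity, as the NodePressure check logs above.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s cpu capacity: %s\n", n.Name, n.Status.Capacity.Cpu())
	}
}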
	I0913 19:43:01.787004   54266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:43:02.071798   54266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:43:02.077101   54266 kubeadm.go:739] kubelet initialised
	I0913 19:43:02.077121   54266 kubeadm.go:740] duration metric: took 5.30071ms waiting for restarted kubelet to initialise ...
	I0913 19:43:02.077130   54266 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:43:02.083417   54266 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7fxbj" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:04.403689   54266 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxbj" in "kube-system" namespace has status "Ready":"False"
	I0913 19:43:01.277431   54890 main.go:134] libmachine: (stopped-upgrade-520539) Calling .GetIP
	I0913 19:43:01.280223   54890 main.go:134] libmachine: (stopped-upgrade-520539) DBG | domain stopped-upgrade-520539 has defined MAC address 52:54:00:ed:f0:e5 in network mk-stopped-upgrade-520539
	I0913 19:43:01.280595   54890 main.go:134] libmachine: (stopped-upgrade-520539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:f0:e5", ip: ""} in network mk-stopped-upgrade-520539: {Iface:virbr1 ExpiryTime:2024-09-13 20:42:42 +0000 UTC Type:0 Mac:52:54:00:ed:f0:e5 Iaid: IPaddr:192.168.50.110 Prefix:24 Hostname:stopped-upgrade-520539 Clientid:01:52:54:00:ed:f0:e5}
	I0913 19:43:01.280620   54890 main.go:134] libmachine: (stopped-upgrade-520539) DBG | domain stopped-upgrade-520539 has defined IP address 192.168.50.110 and MAC address 52:54:00:ed:f0:e5 in network mk-stopped-upgrade-520539
	I0913 19:43:01.280787   54890 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:43:01.284524   54890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:43:01.296341   54890 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0913 19:43:01.296397   54890 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:43:01.349315   54890 crio.go:494] all images are preloaded for cri-o runtime.
	I0913 19:43:01.349328   54890 crio.go:413] Images already preloaded, skipping extraction
	I0913 19:43:01.349373   54890 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:43:01.378037   54890 crio.go:494] all images are preloaded for cri-o runtime.
	I0913 19:43:01.378051   54890 cache_images.go:84] Images are preloaded, skipping loading
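	The preload verification runs "sudo crictl images --output json" inside the VM and treats a fully populated store as already preloaded. A hedged Go sketch that runs the same command and counts images; the {"images":[...]} shape is an assumption based on typical crictl output, and the program is meant to run inside the guest (for example via minikube ssh):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the assumed JSON shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the log shows; requires crictl and sudo inside the VM.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	fmt.Printf("%d images present in the CRI-O store\n", len(imgs.Images))
}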
	I0913 19:43:01.378148   54890 ssh_runner.go:195] Run: crio config
	I0913 19:43:01.420966   54890 cni.go:95] Creating CNI manager for ""
	I0913 19:43:01.420980   54890 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0913 19:43:01.420993   54890 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0913 19:43:01.421015   54890 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.110 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-520539 NodeName:stopped-upgrade-520539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.110 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0913 19:43:01.421227   54890 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "stopped-upgrade-520539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
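	The generated KubeletConfiguration above pins cgroupDriver to systemd (matching CRI-O) and effectively disables disk-pressure eviction. A small Go sketch that parses a trimmed copy of that fragment with gopkg.in/yaml.v3 and reads the driver back; the library choice and the re-parsing step are illustrative assumptions, since minikube renders this config rather than consuming it this way:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// A trimmed copy of the KubeletConfiguration fragment from the log above.
const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDomain: "cluster.local"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		panic(err)
	}
	// Confirms the cgroup driver matches what the CRI-O runtime expects.
	fmt.Println("cgroupDriver:", cfg["cgroupDriver"])
}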
	
	I0913 19:43:01.421328   54890 kubeadm.go:961] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=stopped-upgrade-520539 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.110 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-520539 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0913 19:43:01.421401   54890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 19:43:01.431330   54890 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:43:01.431393   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:43:01.440494   54890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0913 19:43:01.455644   54890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:43:01.470660   54890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0913 19:43:01.485910   54890 ssh_runner.go:195] Run: grep 192.168.50.110	control-plane.minikube.internal$ /etc/hosts
	I0913 19:43:01.489899   54890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:43:01.504168   54890 certs.go:54] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539 for IP: 192.168.50.110
	I0913 19:43:01.504282   54890 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:43:01.504312   54890 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:43:01.504354   54890 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.key
	I0913 19:43:01.504367   54890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.crt with IP's: []
	I0913 19:43:01.906926   54890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.crt ...
	I0913 19:43:01.906949   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.crt: {Name:mk34347f1e14f1ed42eddaf780e49e4769a74f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:01.907162   54890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.key ...
	I0913 19:43:01.907169   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/client.key: {Name:mkddb249814e7a894983f7747080a459bda5cd7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:01.907261   54890 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key.9da92a4a
	I0913 19:43:01.907270   54890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt.9da92a4a with IP's: [192.168.50.110 10.96.0.1 127.0.0.1 10.0.0.1]
	I0913 19:43:02.084024   54890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt.9da92a4a ...
	I0913 19:43:02.084049   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt.9da92a4a: {Name:mkeca1fef52b090240f44eae454775218504702a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:02.084241   54890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key.9da92a4a ...
	I0913 19:43:02.084250   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key.9da92a4a: {Name:mkce9b5ba0a72379e1255bc072600be7495fb48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:02.084374   54890 certs.go:320] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt.9da92a4a -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt
	I0913 19:43:02.084429   54890 certs.go:324] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key.9da92a4a -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key
	I0913 19:43:02.084467   54890 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.key
	I0913 19:43:02.084477   54890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.crt with IP's: []
	I0913 19:43:02.292473   54890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.crt ...
	I0913 19:43:02.292488   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.crt: {Name:mk707486bb995b6d6c0b992b07eda6a728281b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:02.292680   54890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.key ...
	I0913 19:43:02.292685   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.key: {Name:mk8297b0fc7508d28a7debd0e7cd966953ea29b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:02.292880   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:43:02.292913   54890 certs.go:384] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:43:02.292919   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:43:02.292938   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:43:02.292955   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:43:02.292970   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:43:02.293001   54890 certs.go:388] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:43:02.293976   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0913 19:43:02.316954   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:43:02.336878   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:43:02.356994   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/stopped-upgrade-520539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:43:02.376973   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:43:02.396893   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:43:02.416853   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:43:02.441572   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:43:02.468462   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:43:02.491506   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:43:02.518762   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:43:02.541819   54890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:43:02.560768   54890 ssh_runner.go:195] Run: openssl version
	I0913 19:43:02.567823   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:43:02.578320   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:43:02.584062   54890 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:43:02.584122   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:43:02.589762   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:43:02.601970   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:43:02.614522   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:02.619281   54890 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:02.619331   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:02.625019   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:43:02.636399   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:43:02.647180   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:43:02.651715   54890 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:43:02.651763   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:43:02.658864   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
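	The certificate setup above hashes each CA with "openssl x509 -hash -noout -in <cert>" and links it as /etc/ssl/certs/<hash>.0 so OpenSSL can find it by subject hash. A Go sketch that shells out to the same openssl invocation; the subjectHash helper is hypothetical and the program only prints the link it would create:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs the same command shown in the log and returns the short
// hash used to name the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Path taken from the log; adjust for a local experiment.
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
}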
	I0913 19:43:02.671218   54890 kubeadm.go:395] StartCluster: {Name:stopped-upgrade-520539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-520539 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.110 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0913 19:43:02.671305   54890 cri.go:52] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:43:02.671362   54890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:43:02.701675   54890 cri.go:87] found id: ""
	I0913 19:43:02.701759   54890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:43:02.711637   54890 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:43:02.720522   54890 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:43:02.735181   54890 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:43:02.735214   54890 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0913 19:43:02.607653   55271 machine.go:93] provisionDockerMachine start ...
	I0913 19:43:02.607666   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:02.607881   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:02.610783   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.611189   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:02.611212   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.611472   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:02.611639   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.611818   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.611954   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:02.612142   55271 main.go:141] libmachine: Using SSH client type: native
	I0913 19:43:02.612388   55271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0913 19:43:02.612395   55271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:43:02.723869   55271 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-235626
	
	I0913 19:43:02.723917   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetMachineName
	I0913 19:43:02.724174   55271 buildroot.go:166] provisioning hostname "cert-expiration-235626"
	I0913 19:43:02.724201   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetMachineName
	I0913 19:43:02.724357   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:02.727891   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.728281   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:02.728296   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.728554   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:02.728740   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.728913   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.729046   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:02.729203   55271 main.go:141] libmachine: Using SSH client type: native
	I0913 19:43:02.729359   55271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0913 19:43:02.729365   55271 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-235626 && echo "cert-expiration-235626" | sudo tee /etc/hostname
	I0913 19:43:02.861276   55271 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-235626
	
	I0913 19:43:02.861336   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:02.864379   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.864823   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:02.864848   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.865022   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:02.865215   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.865354   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:02.865489   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:02.865665   55271 main.go:141] libmachine: Using SSH client type: native
	I0913 19:43:02.865865   55271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0913 19:43:02.865881   55271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-235626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-235626/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-235626' | sudo tee -a /etc/hosts; 
				fi
			fi
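	The SSH command above rewrites an existing 127.0.1.1 line in /etc/hosts, or appends one if it is missing, so the node resolves its own hostname. A Go sketch that renders the same guarded edit for a given profile name; it only prints the command, since actually applying it needs root on the guest:

package main

import (
	"fmt"
)

// hostsFixup renders the same guarded /etc/hosts edit the provisioner sends
// over SSH: rewrite an existing 127.0.1.1 line, otherwise append one.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixup("cert-expiration-235626"))
}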
	I0913 19:43:02.976375   55271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:43:02.976394   55271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:43:02.976439   55271 buildroot.go:174] setting up certificates
	I0913 19:43:02.976447   55271 provision.go:84] configureAuth start
	I0913 19:43:02.976456   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetMachineName
	I0913 19:43:02.976722   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetIP
	I0913 19:43:02.979745   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.980100   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:02.980119   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.980279   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:02.982576   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.982908   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:02.982922   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:02.983063   55271 provision.go:143] copyHostCerts
	I0913 19:43:02.983107   55271 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:43:02.983112   55271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:43:02.983172   55271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:43:02.983257   55271 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:43:02.983261   55271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:43:02.983281   55271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:43:02.983337   55271 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:43:02.983340   55271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:43:02.983360   55271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:43:02.983423   55271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-235626 san=[127.0.0.1 192.168.72.171 cert-expiration-235626 localhost minikube]
	I0913 19:43:03.183726   55271 provision.go:177] copyRemoteCerts
	I0913 19:43:03.183778   55271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:43:03.183824   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:03.187016   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:03.187381   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:03.187398   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:03.187681   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:03.187851   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:03.187987   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:03.188143   55271 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/cert-expiration-235626/id_rsa Username:docker}
	I0913 19:43:03.269493   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:43:03.296096   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:43:03.322044   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:43:03.346904   55271 provision.go:87] duration metric: took 370.446131ms to configureAuth
	I0913 19:43:03.346921   55271 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:43:03.347088   55271 config.go:182] Loaded profile config "cert-expiration-235626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:43:03.347145   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:03.349669   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:03.350085   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:03.350115   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:03.350283   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:03.350442   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:03.350608   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:03.350757   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:03.350914   55271 main.go:141] libmachine: Using SSH client type: native
	I0913 19:43:03.351057   55271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0913 19:43:03.351071   55271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:43:08.977107   55271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:43:08.977122   55271 machine.go:96] duration metric: took 6.369462032s to provisionDockerMachine
	I0913 19:43:08.977133   55271 start.go:293] postStartSetup for "cert-expiration-235626" (driver="kvm2")
	I0913 19:43:08.977146   55271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:43:08.977166   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:08.977529   55271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:43:08.977553   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:08.980495   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:08.980831   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:08.980853   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:08.981030   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:08.981200   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:08.981326   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:08.981434   55271 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/cert-expiration-235626/id_rsa Username:docker}
	I0913 19:43:09.065370   55271 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:43:09.069836   55271 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:43:09.069859   55271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:43:09.069926   55271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:43:09.070038   55271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:43:09.070192   55271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:43:09.080361   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:43:09.108994   55271 start.go:296] duration metric: took 131.848572ms for postStartSetup
	I0913 19:43:09.109016   55271 fix.go:56] duration metric: took 6.525653495s for fixHost
	I0913 19:43:09.109036   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:09.112161   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.112474   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:09.112495   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.112679   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:09.112845   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:09.113019   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:09.113161   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:09.113333   55271 main.go:141] libmachine: Using SSH client type: native
	I0913 19:43:09.113496   55271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0913 19:43:09.113500   55271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:43:09.215508   55271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726256589.173885811
	
	I0913 19:43:09.215522   55271 fix.go:216] guest clock: 1726256589.173885811
	I0913 19:43:09.215529   55271 fix.go:229] Guest: 2024-09-13 19:43:09.173885811 +0000 UTC Remote: 2024-09-13 19:43:09.109018036 +0000 UTC m=+6.682295211 (delta=64.867775ms)
	I0913 19:43:09.215551   55271 fix.go:200] guest clock delta is within tolerance: 64.867775ms
	I0913 19:43:09.215556   55271 start.go:83] releasing machines lock for "cert-expiration-235626", held for 6.632201619s
	I0913 19:43:09.215577   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:09.215832   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetIP
	I0913 19:43:09.218707   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.219135   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:09.219157   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.219327   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:09.219897   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:09.220092   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .DriverName
	I0913 19:43:09.220182   55271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:43:09.220211   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:09.220292   55271 ssh_runner.go:195] Run: cat /version.json
	I0913 19:43:09.220305   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHHostname
	I0913 19:43:09.223160   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.223393   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.223458   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:09.223472   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.223639   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:09.223740   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:09.223754   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:09.223794   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:09.223945   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHPort
	I0913 19:43:09.223948   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:09.224091   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHKeyPath
	I0913 19:43:09.224095   55271 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/cert-expiration-235626/id_rsa Username:docker}
	I0913 19:43:09.224234   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetSSHUsername
	I0913 19:43:09.224332   55271 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/cert-expiration-235626/id_rsa Username:docker}
	I0913 19:43:09.327865   55271 ssh_runner.go:195] Run: systemctl --version
	I0913 19:43:09.335986   55271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:43:09.509596   55271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:43:09.517714   55271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:43:09.517772   55271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:43:09.530892   55271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 19:43:09.530907   55271 start.go:495] detecting cgroup driver to use...
	I0913 19:43:09.530970   55271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:43:09.552652   55271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:43:09.571508   55271 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:43:09.571563   55271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:43:09.589388   55271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:43:09.605833   55271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:43:09.752413   55271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:43:09.891244   55271 docker.go:233] disabling docker service ...
	I0913 19:43:09.891312   55271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:43:09.908729   55271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:43:09.924024   55271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:43:10.066487   55271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:43:10.205983   55271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:43:10.222916   55271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:43:10.243957   55271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:43:10.244002   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.255526   55271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:43:10.255581   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.266628   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.277814   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.289699   55271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:43:10.301293   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.313574   55271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.326888   55271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:43:10.338885   55271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:43:10.349981   55271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:43:10.361016   55271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:43:10.506654   55271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:43:10.728716   55271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:43:10.728778   55271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:43:10.734902   55271 start.go:563] Will wait 60s for crictl version
	I0913 19:43:10.734965   55271 ssh_runner.go:195] Run: which crictl
	I0913 19:43:10.739338   55271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:43:10.792669   55271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:43:10.792750   55271 ssh_runner.go:195] Run: crio --version
	I0913 19:43:10.822589   55271 ssh_runner.go:195] Run: crio --version
	I0913 19:43:10.853202   55271 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
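	The sed sequence just above pins the pause image, switches CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and re-opens unprivileged low ports. A minimal spot-check on the guest, assuming the edits landed in the same drop-in the commands target, would be roughly:

		sudo grep -E '(pause_image|cgroup_manager|conmon_cgroup) =' /etc/crio/crio.conf.d/02-crio.conf
		# expected, per the sed expressions in the log:
		#   pause_image = "registry.k8s.io/pause:3.10"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		sudo grep -A2 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # should include net.ipv4.ip_unprivileged_port_start=0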
	I0913 19:43:06.595848   54266 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxbj" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:06.595877   54266 pod_ready.go:82] duration metric: took 4.512430091s for pod "coredns-7c65d6cfc9-7fxbj" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:06.595889   54266 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:08.603673   54266 pod_ready.go:103] pod "etcd-pause-933457" in "kube-system" namespace has status "Ready":"False"
	I0913 19:43:10.603808   54266 pod_ready.go:103] pod "etcd-pause-933457" in "kube-system" namespace has status "Ready":"False"
	I0913 19:43:10.854474   55271 main.go:141] libmachine: (cert-expiration-235626) Calling .GetIP
	I0913 19:43:10.857253   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:10.857674   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:60:89", ip: ""} in network mk-cert-expiration-235626: {Iface:virbr4 ExpiryTime:2024-09-13 20:39:34 +0000 UTC Type:0 Mac:52:54:00:e6:60:89 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:cert-expiration-235626 Clientid:01:52:54:00:e6:60:89}
	I0913 19:43:10.857695   55271 main.go:141] libmachine: (cert-expiration-235626) DBG | domain cert-expiration-235626 has defined IP address 192.168.72.171 and MAC address 52:54:00:e6:60:89 in network mk-cert-expiration-235626
	I0913 19:43:10.857934   55271 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:43:10.862643   55271 kubeadm.go:883] updating cluster {Name:cert-expiration-235626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:cert-expiration-235626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:43:10.862739   55271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:43:10.862793   55271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:43:10.914117   55271 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:43:10.914132   55271 crio.go:433] Images already preloaded, skipping extraction
	I0913 19:43:10.914192   55271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:43:10.956333   55271 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:43:10.956352   55271 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:43:10.956360   55271 kubeadm.go:934] updating node { 192.168.72.171 8443 v1.31.1 crio true true} ...
	I0913 19:43:10.956493   55271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-235626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-235626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
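	The [Service] override printed above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 322-byte scp a few lines below). A hedged way to confirm the merged unit on the guest, outside the test flow:

		sudo systemctl cat kubelet | grep -A1 '^ExecStart='
		# the non-empty ExecStart line should carry --hostname-override=cert-expiration-235626 and --node-ip=192.168.72.171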
	I0913 19:43:10.956570   55271 ssh_runner.go:195] Run: crio config
	I0913 19:43:11.016756   55271 cni.go:84] Creating CNI manager for ""
	I0913 19:43:11.016772   55271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:43:11.016782   55271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:43:11.016807   55271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.171 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-235626 NodeName:cert-expiration-235626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:43:11.016974   55271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-235626"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:43:11.017066   55271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:43:11.029511   55271 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:43:11.029595   55271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:43:11.040602   55271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0913 19:43:11.059145   55271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:43:11.081343   55271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0913 19:43:11.104259   55271 ssh_runner.go:195] Run: grep 192.168.72.171	control-plane.minikube.internal$ /etc/hosts
	I0913 19:43:11.108482   55271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:43:11.256131   55271 ssh_runner.go:195] Run: sudo systemctl start kubelet
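	The 2166-byte kubeadm.yaml.new scp'd above is the config kubeadm will consume for this profile. As an illustrative sanity check, assuming the v1.31.1 kubeadm already present under /var/lib/minikube/binaries supports the "config validate" subcommand:

		sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new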
	I0913 19:43:11.271746   55271 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626 for IP: 192.168.72.171
	I0913 19:43:11.271758   55271 certs.go:194] generating shared ca certs ...
	I0913 19:43:11.271775   55271 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:11.271973   55271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:43:11.272053   55271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:43:11.272061   55271 certs.go:256] generating profile certs ...
	W0913 19:43:11.272206   55271 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0913 19:43:11.272226   55271 certs.go:624] cert expired /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.crt: expiration: 2024-09-13 19:42:49 +0000 UTC, now: 2024-09-13 19:43:11.272221481 +0000 UTC m=+8.845498655
	I0913 19:43:11.272342   55271 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.key
	I0913 19:43:11.272368   55271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.crt with IP's: []
	I0913 19:43:11.452439   55271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.crt ...
	I0913 19:43:11.452451   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.crt: {Name:mk0f4250a884360245fdc8e84da5d16743f6fa58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:11.452581   55271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.key ...
	I0913 19:43:11.452588   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/client.key: {Name:mkdb6005f1a20c35428cbc8d10498128582db9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0913 19:43:11.452727   55271 out.go:270] ! Certificate apiserver.crt.cf9bca76 has expired. Generating a new one...
	I0913 19:43:11.452744   55271 certs.go:624] cert expired /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt.cf9bca76: expiration: 2024-09-13 19:42:49 +0000 UTC, now: 2024-09-13 19:43:11.452739471 +0000 UTC m=+9.026016636
	I0913 19:43:11.452809   55271 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key.cf9bca76
	I0913 19:43:11.452819   55271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt.cf9bca76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.171]
	I0913 19:43:11.745036   55271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt.cf9bca76 ...
	I0913 19:43:11.745050   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt.cf9bca76: {Name:mka37c9d15041a54e3f798c2acefeb618e1d142b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:11.745185   55271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key.cf9bca76 ...
	I0913 19:43:11.745192   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key.cf9bca76: {Name:mk0bb0502a87f519fb6288111026edb95e130e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:11.745250   55271 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt.cf9bca76 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt
	I0913 19:43:11.745392   55271 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key.cf9bca76 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key
	W0913 19:43:11.745563   55271 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0913 19:43:11.745579   55271 certs.go:624] cert expired /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.crt: expiration: 2024-09-13 19:42:49 +0000 UTC, now: 2024-09-13 19:43:11.745575852 +0000 UTC m=+9.318853017
	I0913 19:43:11.745636   55271 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.key
	I0913 19:43:11.745652   55271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.crt with IP's: []
	I0913 19:43:12.063866   55271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.crt ...
	I0913 19:43:12.063885   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.crt: {Name:mka7dcd1b937df37230b11247bb955254ebf8d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:12.064064   55271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.key ...
	I0913 19:43:12.064076   55271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.key: {Name:mkfe491a6e91675bafecde727f9eadc7ac3120c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:12.064321   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:43:12.064363   55271 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:43:12.064371   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:43:12.064405   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:43:12.064455   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:43:12.064487   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:43:12.064540   55271 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:43:12.065323   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:43:12.122580   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:43:12.330878   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:43:12.431683   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:43:12.603958   54266 pod_ready.go:103] pod "etcd-pause-933457" in "kube-system" namespace has status "Ready":"False"
	I0913 19:43:13.107968   54266 pod_ready.go:93] pod "etcd-pause-933457" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:13.107997   54266 pod_ready.go:82] duration metric: took 6.512100347s for pod "etcd-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:13.108012   54266 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:15.115101   54266 pod_ready.go:103] pod "kube-apiserver-pause-933457" in "kube-system" namespace has status "Ready":"False"
	I0913 19:43:16.114780   54266 pod_ready.go:93] pod "kube-apiserver-pause-933457" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:16.114805   54266 pod_ready.go:82] duration metric: took 3.006784728s for pod "kube-apiserver-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.114818   54266 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.118929   54266 pod_ready.go:93] pod "kube-controller-manager-pause-933457" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:16.118947   54266 pod_ready.go:82] duration metric: took 4.122347ms for pod "kube-controller-manager-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.118956   54266 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-frbfp" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.124027   54266 pod_ready.go:93] pod "kube-proxy-frbfp" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:16.124043   54266 pod_ready.go:82] duration metric: took 5.081595ms for pod "kube-proxy-frbfp" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.124051   54266 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.128987   54266 pod_ready.go:93] pod "kube-scheduler-pause-933457" in "kube-system" namespace has status "Ready":"True"
	I0913 19:43:16.129011   54266 pod_ready.go:82] duration metric: took 4.948383ms for pod "kube-scheduler-pause-933457" in "kube-system" namespace to be "Ready" ...
	I0913 19:43:16.129019   54266 pod_ready.go:39] duration metric: took 14.051875678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:43:16.129036   54266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:43:16.141603   54266 ops.go:34] apiserver oom_adj: -16
	I0913 19:43:16.141626   54266 kubeadm.go:597] duration metric: took 41.899151947s to restartPrimaryControlPlane
	I0913 19:43:16.141636   54266 kubeadm.go:394] duration metric: took 42.079750017s to StartCluster
	I0913 19:43:16.141660   54266 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:16.141725   54266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:43:16.143146   54266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:16.143405   54266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:43:16.143599   54266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:43:16.144065   54266 config.go:182] Loaded profile config "pause-933457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:43:16.145182   54266 out.go:177] * Verifying Kubernetes components...
	I0913 19:43:16.145943   54266 out.go:177] * Enabled addons: 
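	The component verification for pause-933457 mirrors the pod_ready polls a few lines earlier (coredns through kube-scheduler). A rough kubectl equivalent, assuming the profile's context name in the test kubeconfig matches the profile name:

		kubectl --context pause-933457 -n kube-system wait pod \
		  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s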
	I0913 19:43:16.621010   54890 out.go:204]   - Generating certificates and keys ...
	I0913 19:43:16.623782   54890 out.go:204]   - Booting up control plane ...
	I0913 19:43:16.626254   54890 out.go:204]   - Configuring RBAC rules ...
	I0913 19:43:16.628273   54890 cni.go:95] Creating CNI manager for ""
	I0913 19:43:16.628285   54890 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0913 19:43:16.629685   54890 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:43:16.630986   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:43:16.641519   54890 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0913 19:43:16.662954   54890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:43:16.663024   54890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=f4b412861bb746be73053c9f6d2895f12cf78565 minikube.k8s.io/name=stopped-upgrade-520539 minikube.k8s.io/updated_at=2024_09_13T19_43_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 19:43:16.663032   54890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 19:43:16.700907   54890 ops.go:34] apiserver oom_adj: -16
	I0913 19:43:16.899700   54890 kubeadm.go:1045] duration metric: took 236.741078ms to wait for elevateKubeSystemPrivileges.
	I0913 19:43:16.938670   54890 kubeadm.go:397] StartCluster complete in 14.267462103s
	I0913 19:43:16.938701   54890 settings.go:142] acquiring lock: {Name:mkf30569800948772d6c0737d6db82ca36b804e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:16.938851   54890 settings.go:150] Updating kubeconfig:  /tmp/legacy_kubeconfig3635365411
	I0913 19:43:16.939287   54890 lock.go:35] WriteFile acquiring /tmp/legacy_kubeconfig3635365411: {Name:mk92aba0c8e6cfb5c7a0258c869e3e7f1a19b4a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:43:17.456618   54890 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "stopped-upgrade-520539" rescaled to 1
	I0913 19:43:17.456661   54890 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.50.110 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:43:17.458452   54890 out.go:177] * Verifying Kubernetes components...
	I0913 19:43:17.456712   54890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 19:43:12.476882   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:43:12.656820   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:43:12.722173   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:43:12.864811   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/cert-expiration-235626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:43:12.940353   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:43:13.054737   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:43:13.158597   55271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:43:13.291873   55271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:43:13.321021   55271 ssh_runner.go:195] Run: openssl version
	I0913 19:43:13.329217   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:43:13.350034   55271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:43:13.358447   55271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:43:13.358505   55271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:43:13.367996   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:43:13.386417   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:43:13.401057   55271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:13.407526   55271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:13.407585   55271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:43:13.418699   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:43:13.431772   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:43:13.445675   55271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:43:13.451551   55271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:43:13.451600   55271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:43:13.468640   55271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:43:13.498237   55271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:43:13.518919   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:43:13.529689   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:43:13.546771   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:43:13.558785   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:43:13.581878   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:43:13.594922   55271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
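	The six -checkend probes above are how the test decides whether the control-plane certs for cert-expiration-235626 still have at least 24h of validity. A standalone sketch of the same check (cert path copied from the log; exit status 0 means still valid for the whole window):

		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "valid for at least 86400s" \
		  || echo "expires within 86400s"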
	I0913 19:43:13.605476   55271 kubeadm.go:392] StartCluster: {Name:cert-expiration-235626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.31.1 ClusterName:cert-expiration-235626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:43:13.605564   55271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:43:13.605621   55271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:43:13.681172   55271 cri.go:89] found id: "923654eea96173c8c980ee5857f8ec0adc269480dcf812ef36940ca348e867b8"
	I0913 19:43:13.681182   55271 cri.go:89] found id: "f757a79b6ababc504e27203f9015f10af299064a2534ae745bd82ccbd3f8c432"
	I0913 19:43:13.681185   55271 cri.go:89] found id: "be5c63aa8a67e8153994ac11f1b4e00621239818ccc756cc4f3a8da7dbd9d88e"
	I0913 19:43:13.681187   55271 cri.go:89] found id: "1203c50c57b4c9525b36650a565887456ab8139199f1e030cf161de3b2b33aac"
	I0913 19:43:13.681189   55271 cri.go:89] found id: "ca4ed7706af1cb9c0f52d1c068164c6a5a04715fc9ee276c49141ed9b0519559"
	I0913 19:43:13.681191   55271 cri.go:89] found id: "79fd366db23d3a7bc7bc710e83b2ab53e92d0300ca258f922a4761136bea0420"
	I0913 19:43:13.681193   55271 cri.go:89] found id: "7ee2aea8ef68108297bb3fc06e7c65114b936bba46e7aa98c13593ba7a177530"
	I0913 19:43:13.681194   55271 cri.go:89] found id: "e24e2095856f142b48eda1bc10a53765d3b9f7a2d5819b57a3548b12d63adf5f"
	I0913 19:43:13.681196   55271 cri.go:89] found id: "b09831004bd9e3dc0ac4bb274d2725e73e9d1b50e34f0f0e43c0fc6d8012f11f"
	I0913 19:43:13.681200   55271 cri.go:89] found id: "586e821e7bbde256ae5a0d8add259b35e2d93633ffa78a1cac07820f2cf62f77"
	I0913 19:43:13.681202   55271 cri.go:89] found id: "ebbe9d7a33d1d1329ad455fdd061d48b0c0d835493f7fa9295a383e2a86a433d"
	I0913 19:43:13.681204   55271 cri.go:89] found id: "dd387bf777e32cf5459ad36c4686b0dfab500f68d42dfc2d6b9f59441b5cbff4"
	I0913 19:43:13.681205   55271 cri.go:89] found id: ""
	I0913 19:43:13.681243   55271 ssh_runner.go:195] Run: sudo runc list -f json
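	The container IDs above come from a label-filtered CRI query; the equivalent manual invocation on the guest, using the endpoint configured in /etc/crictl.yaml earlier in the log, would be roughly:

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
		  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system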
	
	
	==> CRI-O <==
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.039854520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256600039809217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=944c651a-13d5-4c18-90cd-2666bb8a8d90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.040363024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a6335cc-0450-4504-a47a-571bbaa88731 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.040417787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a6335cc-0450-4504-a47a-571bbaa88731 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.040702728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a6335cc-0450-4504-a47a-571bbaa88731 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.082442303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dddd2e80-e389-4204-a1f7-3bb6d9217daa name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.082528291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dddd2e80-e389-4204-a1f7-3bb6d9217daa name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.083689017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4a87a3b-294d-4e09-99ba-d11a6d09d4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.084049806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256600084029800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4a87a3b-294d-4e09-99ba-d11a6d09d4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.084943581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dabf4921-f19f-42f0-922f-63f016b269b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.085138273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dabf4921-f19f-42f0-922f-63f016b269b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.085412232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dabf4921-f19f-42f0-922f-63f016b269b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.128863939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e93e262-b710-4d41-826d-a5b45372e48a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.128968024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e93e262-b710-4d41-826d-a5b45372e48a name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.131060427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e011c661-cf2d-4a0f-a383-ebac007821b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.131436236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256600131416214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e011c661-cf2d-4a0f-a383-ebac007821b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.132086814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb36c420-c5d7-4706-852f-8de0699960bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.132157335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb36c420-c5d7-4706-852f-8de0699960bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.132388778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb36c420-c5d7-4706-852f-8de0699960bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.175246369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b52b315-6ccf-4dc9-bcf9-78ab2074be81 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.175336954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b52b315-6ccf-4dc9-bcf9-78ab2074be81 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.176921270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=586b3799-4423-4aca-84af-8e77ea408b34 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.177415533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256600177376177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=586b3799-4423-4aca-84af-8e77ea408b34 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.178102063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b44b258b-dd4b-4e91-93d2-2a8e14965d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.178177485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b44b258b-dd4b-4e91-93d2-2a8e14965d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:20 pause-933457 crio[2102]: time="2024-09-13 19:43:20.178496177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b44b258b-dd4b-4e91-93d2-2a8e14965d14 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da5210c32a823       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   6bd168360a061       coredns-7c65d6cfc9-7fxbj
	0f710e3da45b1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   91087deff47dd       kube-proxy-frbfp
	98bbf70fd6330       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago      Running             kube-apiserver            2                   2e80380b45984       kube-apiserver-pause-933457
	a7d538f1baabf       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Running             kube-scheduler            2                   5eb3122c85000       kube-scheduler-pause-933457
	b17e0cbb3322d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   0686db9965f3c       etcd-pause-933457
	63e339ba2951c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Running             kube-controller-manager   2                   d0bc474a4e97d       kube-controller-manager-pause-933457
	ba02c00e17ad3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   1                   6bd168360a061       coredns-7c65d6cfc9-7fxbj
	4dc94897deaaf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   47 seconds ago      Exited              kube-proxy                1                   91087deff47dd       kube-proxy-frbfp
	8676b252bf1c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   47 seconds ago      Exited              kube-controller-manager   1                   d0bc474a4e97d       kube-controller-manager-pause-933457
	61067c1a85d86       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago      Exited              etcd                      1                   0686db9965f3c       etcd-pause-933457
	6edacfe2510fb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   47 seconds ago      Exited              kube-apiserver            1                   2e80380b45984       kube-apiserver-pause-933457
	c49e99000aa6d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   47 seconds ago      Exited              kube-scheduler            1                   5eb3122c85000       kube-scheduler-pause-933457
	
	
	==> coredns [ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49449 - 21658 "HINFO IN 3703302259842171280.2195062110973943195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015494249s
	
	
	==> coredns [da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39816 - 20096 "HINFO IN 2119544516448739733.2420666444096889193. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025521087s
	
	
	==> describe nodes <==
	Name:               pause-933457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-933457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=pause-933457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_41_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:41:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-933457
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.59
	  Hostname:    pause-933457
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7f6b0d702d54608a683218ef1f97497
	  System UUID:                e7f6b0d7-02d5-4608-a683-218ef1f97497
	  Boot ID:                    10ce631c-aafb-45ec-87cc-19580d513661
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7fxbj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     113s
	  kube-system                 etcd-pause-933457                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-933457             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-933457    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-frbfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-933457             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node pause-933457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node pause-933457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node pause-933457 status is now: NodeHasSufficientPID
	  Normal  NodeReady                117s               kubelet          Node pause-933457 status is now: NodeReady
	  Normal  RegisteredNode           114s               node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	  Normal  RegisteredNode           40s                node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-933457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-933457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-933457 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep13 19:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061424] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067497] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182524] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.155828] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.317831] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.105849] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.544100] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.064854] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999663] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.096093] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.790314] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.571357] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.609915] kauditd_printk_skb: 50 callbacks suppressed
	[Sep13 19:42] systemd-fstab-generator[2027]: Ignoring "noauto" option for root device
	[  +0.206789] systemd-fstab-generator[2039]: Ignoring "noauto" option for root device
	[  +0.197656] systemd-fstab-generator[2054]: Ignoring "noauto" option for root device
	[  +0.153262] systemd-fstab-generator[2066]: Ignoring "noauto" option for root device
	[  +0.330996] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +2.847049] systemd-fstab-generator[2243]: Ignoring "noauto" option for root device
	[  +4.857546] kauditd_printk_skb: 195 callbacks suppressed
	[ +18.785624] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[Sep13 19:43] kauditd_printk_skb: 52 callbacks suppressed
	[  +9.719493] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	
	
	==> etcd [61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966] <==
	{"level":"info","ts":"2024-09-13T19:42:35.746163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:42:35.746184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 received MsgPreVoteResp from 9984c9f5bd40dbf7 at term 2"}
	{"level":"info","ts":"2024-09-13T19:42:35.746207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 received MsgVoteResp from 9984c9f5bd40dbf7 at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9984c9f5bd40dbf7 elected leader 9984c9f5bd40dbf7 at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.753256Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9984c9f5bd40dbf7","local-member-attributes":"{Name:pause-933457 ClientURLs:[https://192.168.83.59:2379]}","request-path":"/0/members/9984c9f5bd40dbf7/attributes","cluster-id":"fc03475d3706ce65","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:42:35.753314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:42:35.754036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:42:35.754753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:42:35.754777Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:42:35.755089Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:42:35.755721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:42:35.756329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:42:35.756793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.59:2379"}
	{"level":"info","ts":"2024-09-13T19:42:44.748148Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T19:42:44.748198Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-933457","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.59:2380"],"advertise-client-urls":["https://192.168.83.59:2379"]}
	{"level":"warn","ts":"2024-09-13T19:42:44.748268Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.748348Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.778179Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.778258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.59:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T19:42:44.779727Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9984c9f5bd40dbf7","current-leader-member-id":"9984c9f5bd40dbf7"}
	{"level":"info","ts":"2024-09-13T19:42:44.785892Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.83.59:2380"}
	{"level":"info","ts":"2024-09-13T19:42:44.786019Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.83.59:2380"}
	{"level":"info","ts":"2024-09-13T19:42:44.786050Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-933457","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.59:2380"],"advertise-client-urls":["https://192.168.83.59:2379"]}
	
	
	==> etcd [b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4] <==
	{"level":"info","ts":"2024-09-13T19:43:05.374016Z","caller":"traceutil/trace.go:171","msg":"trace[619488644] linearizableReadLoop","detail":"{readStateIndex:566; appliedIndex:565; }","duration":"332.81954ms","start":"2024-09-13T19:43:05.041181Z","end":"2024-09-13T19:43:05.374001Z","steps":["trace[619488644] 'read index received'  (duration: 290.581543ms)","trace[619488644] 'applied index is now lower than readState.Index'  (duration: 42.236723ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:05.374193Z","caller":"traceutil/trace.go:171","msg":"trace[1455107215] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"335.320255ms","start":"2024-09-13T19:43:05.038747Z","end":"2024-09-13T19:43:05.374067Z","steps":["trace[1455107215] 'process raft request'  (duration: 293.045957ms)","trace[1455107215] 'compare'  (duration: 41.994563ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:05.374469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.038736Z","time spent":"335.692024ms","remote":"127.0.0.1:35912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":746,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e53606166eca\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e53606166eca\" value_size:655 lease:6626925823406077536 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:43:05.374210Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.010048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-09-13T19:43:05.374751Z","caller":"traceutil/trace.go:171","msg":"trace[157284599] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:522; }","duration":"333.562743ms","start":"2024-09-13T19:43:05.041179Z","end":"2024-09-13T19:43:05.374741Z","steps":["trace[157284599] 'agreement among raft nodes before linearized reading'  (duration: 332.940969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.374797Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.041155Z","time spent":"333.631803ms","remote":"127.0.0.1:36020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.374245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.101208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2024-09-13T19:43:05.374906Z","caller":"traceutil/trace.go:171","msg":"trace[1136643454] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj; range_end:; response_count:1; response_revision:522; }","duration":"300.756824ms","start":"2024-09-13T19:43:05.074141Z","end":"2024-09-13T19:43:05.374898Z","steps":["trace[1136643454] 'agreement among raft nodes before linearized reading'  (duration: 300.088126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.374936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.074107Z","time spent":"300.818806ms","remote":"127.0.0.1:36006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5171,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.855018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.749029ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15850297860260853454 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" value_size:655 lease:6626925823406077536 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:43:05.855166Z","caller":"traceutil/trace.go:171","msg":"trace[566151769] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:567; }","duration":"400.310838ms","start":"2024-09-13T19:43:05.454845Z","end":"2024-09-13T19:43:05.855156Z","steps":["trace[566151769] 'read index received'  (duration: 72.300932ms)","trace[566151769] 'applied index is now lower than readState.Index'  (duration: 328.008157ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:05.855197Z","caller":"traceutil/trace.go:171","msg":"trace[2137692898] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"400.422336ms","start":"2024-09-13T19:43:05.454752Z","end":"2024-09-13T19:43:05.855175Z","steps":["trace[2137692898] 'process raft request'  (duration: 72.4595ms)","trace[2137692898] 'compare'  (duration: 327.657835ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:05.855301Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.454736Z","time spent":"400.52666ms","remote":"127.0.0.1:35912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":746,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" value_size:655 lease:6626925823406077536 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:43:05.855389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.534103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-09-13T19:43:05.855432Z","caller":"traceutil/trace.go:171","msg":"trace[380587743] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:524; }","duration":"400.583253ms","start":"2024-09-13T19:43:05.454842Z","end":"2024-09-13T19:43:05.855425Z","steps":["trace[380587743] 'agreement among raft nodes before linearized reading'  (duration: 400.447211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.855471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.454813Z","time spent":"400.652455ms","remote":"127.0.0.1:36020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.855726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.254708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2024-09-13T19:43:05.855774Z","caller":"traceutil/trace.go:171","msg":"trace[1417660270] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj; range_end:; response_count:1; response_revision:524; }","duration":"281.307429ms","start":"2024-09-13T19:43:05.574458Z","end":"2024-09-13T19:43:05.855766Z","steps":["trace[1417660270] 'agreement among raft nodes before linearized reading'  (duration: 281.218478ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:06.148990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.621021ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15850297860260853457 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e5360b84af93\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e5360b84af93\" value_size:655 lease:6626925823406077536 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:43:06.149168Z","caller":"traceutil/trace.go:171","msg":"trace[1682055123] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"289.047872ms","start":"2024-09-13T19:43:05.860109Z","end":"2024-09-13T19:43:06.149157Z","steps":["trace[1682055123] 'read index received'  (duration: 124.205197ms)","trace[1682055123] 'applied index is now lower than readState.Index'  (duration: 164.841648ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:06.149242Z","caller":"traceutil/trace.go:171","msg":"trace[1744278239] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"289.470903ms","start":"2024-09-13T19:43:05.859742Z","end":"2024-09-13T19:43:06.149213Z","steps":["trace[1744278239] 'process raft request'  (duration: 124.584882ms)","trace[1744278239] 'compare'  (duration: 164.491169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:06.149366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.247756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-09-13T19:43:06.149410Z","caller":"traceutil/trace.go:171","msg":"trace[1411431965] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pv-protection-controller; range_end:; response_count:1; response_revision:525; }","duration":"289.297377ms","start":"2024-09-13T19:43:05.860106Z","end":"2024-09-13T19:43:06.149403Z","steps":["trace[1411431965] 'agreement among raft nodes before linearized reading'  (duration: 289.13477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:06.149678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.700055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-933457\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-09-13T19:43:06.150513Z","caller":"traceutil/trace.go:171","msg":"trace[936262000] range","detail":"{range_begin:/registry/minions/pause-933457; range_end:; response_count:1; response_revision:525; }","duration":"289.532214ms","start":"2024-09-13T19:43:05.860969Z","end":"2024-09-13T19:43:06.150501Z","steps":["trace[936262000] 'agreement among raft nodes before linearized reading'  (duration: 288.6185ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:43:20 up 2 min,  0 users,  load average: 0.50, 0.22, 0.08
	Linux pause-933457 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27] <==
	W0913 19:42:54.063386       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.106213       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.113763       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.153970       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.166588       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.166912       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.174800       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.215299       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.234749       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.269069       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.280757       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.296888       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.319018       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.322456       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.377049       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.401581       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.406019       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.434750       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.542917       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.590690       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.609720       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.633464       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.677275       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.788908       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.888368       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded] <==
	I0913 19:43:00.553959       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:43:00.554150       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:43:00.554190       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:43:00.556296       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:43:00.556991       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:43:00.569271       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:43:00.569468       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:43:00.569542       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:43:00.569688       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:43:00.570331       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:43:00.570371       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:43:00.570378       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:43:00.570383       1 cache.go:39] Caches are synced for autoregister controller
	E0913 19:43:00.577827       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0913 19:43:00.608687       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:43:00.608709       1 policy_source.go:224] refreshing policies
	I0913 19:43:00.615109       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:43:01.355228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:43:01.899477       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 19:43:01.913262       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 19:43:01.956072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 19:43:01.985992       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:43:01.992742       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:43:06.450847       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:43:06.453016       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d] <==
	I0913 19:43:06.330221       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:43:06.339226       1 shared_informer.go:320] Caches are synced for PVC protection
	I0913 19:43:06.348461       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:43:06.354030       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:43:06.362224       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:43:06.367999       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0913 19:43:06.370731       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:43:06.372810       1 shared_informer.go:320] Caches are synced for endpoint
	I0913 19:43:06.373423       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0913 19:43:06.374670       1 shared_informer.go:320] Caches are synced for job
	I0913 19:43:06.384251       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0913 19:43:06.384384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.912µs"
	I0913 19:43:06.399056       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0913 19:43:06.402382       1 shared_informer.go:320] Caches are synced for HPA
	I0913 19:43:06.405928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0913 19:43:06.408090       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:43:06.413723       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0913 19:43:06.419071       1 shared_informer.go:320] Caches are synced for disruption
	I0913 19:43:06.458156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.276886ms"
	I0913 19:43:06.458731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.97µs"
	I0913 19:43:06.464024       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:43:06.488156       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:43:06.866755       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:43:06.866793       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:43:06.905198       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a] <==
	I0913 19:42:40.480724       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0913 19:42:40.482307       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0913 19:42:40.505237       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:42:40.520784       1 shared_informer.go:320] Caches are synced for taint
	I0913 19:42:40.521119       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0913 19:42:40.521364       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-933457"
	I0913 19:42:40.521489       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:42:40.530337       1 shared_informer.go:320] Caches are synced for GC
	I0913 19:42:40.536857       1 shared_informer.go:320] Caches are synced for node
	I0913 19:42:40.536930       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0913 19:42:40.536953       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0913 19:42:40.536957       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0913 19:42:40.536962       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0913 19:42:40.537082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-933457"
	I0913 19:42:40.543990       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:42:40.556157       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:42:40.562537       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:42:40.629111       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:42:40.632702       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:42:40.680128       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:42:40.694227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="265.056776ms"
	I0913 19:42:40.694786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="128.767µs"
	I0913 19:42:41.096581       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:42:41.129072       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:42:41.129117       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:43:01.144136       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:43:01.155453       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.83.59"]
	E0913 19:43:01.155774       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:43:01.201128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:43:01.201174       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:43:01.201197       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:43:01.204496       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:43:01.204882       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:43:01.204908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:43:01.206179       1 config.go:199] "Starting service config controller"
	I0913 19:43:01.206221       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:43:01.206248       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:43:01.206252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:43:01.206819       1 config.go:328] "Starting node config controller"
	I0913 19:43:01.206857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:43:01.306724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:43:01.306749       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:43:01.306891       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:42:34.906023       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:42:37.197942       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.83.59"]
	E0913 19:42:37.242711       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:42:37.341340       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:42:37.341505       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:42:37.341598       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:42:37.349759       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:42:37.355412       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:42:37.355843       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:42:37.358157       1 config.go:199] "Starting service config controller"
	I0913 19:42:37.358262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:42:37.358339       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:42:37.358366       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:42:37.363469       1 config.go:328] "Starting node config controller"
	I0913 19:42:37.363599       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:42:37.459441       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:42:37.459584       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:42:37.464756       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4] <==
	I0913 19:42:58.685760       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:43:00.439273       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:43:00.439451       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:43:00.439485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:43:00.439556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:43:00.505572       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:43:00.505676       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:43:00.514348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:43:00.514396       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:43:00.515079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:43:00.515142       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:43:00.615709       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259] <==
	I0913 19:42:34.703299       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:42:37.069886       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:42:37.069930       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:42:37.069940       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:42:37.069952       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:42:37.189547       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:42:37.189597       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:42:37.207335       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:42:37.207563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:42:37.207673       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:42:37.207712       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:42:37.308569       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:42:54.958219       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0913 19:42:54.958280       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0913 19:42:54.958527       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0913 19:42:54.958535       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.033926    3098 scope.go:117] "RemoveContainer" containerID="8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.046871    3098 scope.go:117] "RemoveContainer" containerID="61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.056272    3098 scope.go:117] "RemoveContainer" containerID="6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.061905    3098 scope.go:117] "RemoveContainer" containerID="c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.224988    3098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-933457?timeout=10s\": dial tcp 192.168.83.59:8443: connect: connection refused" interval="800ms"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.433455    3098 kubelet_node_status.go:72] "Attempting to register node" node="pause-933457"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.434545    3098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.59:8443: connect: connection refused" node="pause-933457"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: W0913 19:42:57.553840    3098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.83.59:8443: connect: connection refused
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.553981    3098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.83.59:8443: connect: connection refused" logger="UnhandledError"
	Sep 13 19:42:58 pause-933457 kubelet[3098]: I0913 19:42:58.237108    3098 kubelet_node_status.go:72] "Attempting to register node" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.582857    3098 apiserver.go:52] "Watching apiserver"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.611741    3098 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.662888    3098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb8342b-c790-4425-baeb-c40e02d7fad0-xtables-lock\") pod \"kube-proxy-frbfp\" (UID: \"cfb8342b-c790-4425-baeb-c40e02d7fad0\") " pod="kube-system/kube-proxy-frbfp"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.662986    3098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb8342b-c790-4425-baeb-c40e02d7fad0-lib-modules\") pod \"kube-proxy-frbfp\" (UID: \"cfb8342b-c790-4425-baeb-c40e02d7fad0\") " pod="kube-system/kube-proxy-frbfp"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.702513    3098 kubelet_node_status.go:111] "Node was previously registered" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.702909    3098 kubelet_node_status.go:75] "Successfully registered node" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.703039    3098 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.704433    3098 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.888959    3098 scope.go:117] "RemoveContainer" containerID="4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.889031    3098 scope.go:117] "RemoveContainer" containerID="ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: I0913 19:43:06.421402    3098 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: E0913 19:43:06.726722    3098 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256586726270300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: E0913 19:43:06.726791    3098 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256586726270300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:16 pause-933457 kubelet[3098]: E0913 19:43:16.728324    3098 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256596728049078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:16 pause-933457 kubelet[3098]: E0913 19:43:16.728368    3098 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256596728049078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0913 19:43:19.731020   55501 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19636-3902/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-933457 -n pause-933457
helpers_test.go:261: (dbg) Run:  kubectl --context pause-933457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-933457 -n pause-933457
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-933457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-933457 logs -n 25: (1.385243718s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-568412                | offline-crio-568412       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	| start   | -p force-systemd-flag-642942          | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:40 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:39 UTC |
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:40 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-605510             | running-upgrade-605510    | jenkins | v1.34.0 | 13 Sep 24 19:39 UTC | 13 Sep 24 19:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-642942 ssh cat     | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-642942          | force-systemd-flag-642942 | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	| start   | -p pause-933457 --memory=2048         | pause-933457              | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:41 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-590674 sudo           | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:40 UTC |
	| start   | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:40 UTC | 13 Sep 24 19:41 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-590674 sudo           | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-590674                | NoKubernetes-590674       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:41 UTC |
	| start   | -p cert-options-718151                | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:42 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-605510             | running-upgrade-605510    | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:41 UTC |
	| start   | -p kubernetes-upgrade-421098          | kubernetes-upgrade-421098 | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-933457                       | pause-933457              | jenkins | v1.34.0 | 13 Sep 24 19:41 UTC | 13 Sep 24 19:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-718151 ssh               | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-718151 -- sudo        | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-718151                | cert-options-718151       | jenkins | v1.34.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:42 UTC |
	| start   | -p stopped-upgrade-520539             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 19:42 UTC | 13 Sep 24 19:43 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-235626             | cert-expiration-235626    | jenkins | v1.34.0 | 13 Sep 24 19:43 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-520539 stop           | minikube                  | jenkins | v1.26.0 | 13 Sep 24 19:43 UTC | 13 Sep 24 19:43 UTC |
	| start   | -p stopped-upgrade-520539             | stopped-upgrade-520539    | jenkins | v1.34.0 | 13 Sep 24 19:43 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
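	
	(Editor's note: the table above is minikube's command audit trail for this agent. As a rough aid for slicing such entries programmatically, here is a minimal Go sketch that filters rows by profile. The AuditRow field names and the JSON-lines input format are illustrative assumptions only, not minikube's actual audit schema.)
	
	    package main
	
	    import (
	        "bufio"
	        "encoding/json"
	        "fmt"
	        "os"
	    )
	
	    // AuditRow mirrors the columns of the table above (Command, Args, Profile,
	    // User, Version, Start Time, End Time). The JSON field names are
	    // hypothetical placeholders, not minikube's real audit format.
	    type AuditRow struct {
	        Command   string `json:"command"`
	        Args      string `json:"args"`
	        Profile   string `json:"profile"`
	        User      string `json:"user"`
	        Version   string `json:"version"`
	        StartTime string `json:"startTime"`
	        EndTime   string `json:"endTime"`
	    }
	
	    func main() {
	        if len(os.Args) < 2 {
	            fmt.Fprintln(os.Stderr, "usage: filteraudit <profile> < audit.jsonl")
	            os.Exit(1)
	        }
	        want := os.Args[1]
	        sc := bufio.NewScanner(os.Stdin)
	        for sc.Scan() {
	            var row AuditRow
	            if err := json.Unmarshal(sc.Bytes(), &row); err != nil {
	                continue // skip anything that is not a JSON audit entry
	            }
	            if row.Profile == want {
	                fmt.Printf("%-8s %-40s %s -> %s\n", row.Command, row.Args, row.StartTime, row.EndTime)
	            }
	        }
	    }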
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:43:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:43:21.548991   55629 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:43:21.549082   55629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:43:21.549090   55629 out.go:358] Setting ErrFile to fd 2...
	I0913 19:43:21.549094   55629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:43:21.549266   55629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:43:21.549785   55629 out.go:352] Setting JSON to false
	I0913 19:43:21.550739   55629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5144,"bootTime":1726251457,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:43:21.550844   55629 start.go:139] virtualization: kvm guest
	I0913 19:43:21.553269   55629 out.go:177] * [stopped-upgrade-520539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:43:21.554607   55629 notify.go:220] Checking for updates...
	I0913 19:43:21.554648   55629 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:43:21.555860   55629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:43:21.557110   55629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:43:21.558306   55629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:43:21.559648   55629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:43:21.560879   55629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:43:21.562467   55629 config.go:182] Loaded profile config "stopped-upgrade-520539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0913 19:43:21.562857   55629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:43:21.562899   55629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:43:21.580726   55629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0913 19:43:21.581324   55629 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:43:21.581965   55629 main.go:141] libmachine: Using API Version  1
	I0913 19:43:21.581992   55629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:43:21.582464   55629 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:43:21.582614   55629 main.go:141] libmachine: (stopped-upgrade-520539) Calling .DriverName
	I0913 19:43:21.584387   55629 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:43:21.585595   55629 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:43:21.586018   55629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:43:21.586064   55629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:43:21.601430   55629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40603
	I0913 19:43:21.601880   55629 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:43:21.602410   55629 main.go:141] libmachine: Using API Version  1
	I0913 19:43:21.602430   55629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:43:21.602721   55629 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:43:21.602897   55629 main.go:141] libmachine: (stopped-upgrade-520539) Calling .DriverName
	I0913 19:43:21.645047   55629 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:43:21.646248   55629 start.go:297] selected driver: kvm2
	I0913 19:43:21.646266   55629 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-520539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-520
539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.110 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 19:43:21.646384   55629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:43:21.647050   55629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:43:21.647129   55629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:43:21.664200   55629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:43:21.664746   55629 cni.go:84] Creating CNI manager for ""
	I0913 19:43:21.664818   55629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:43:21.664900   55629 start.go:340] cluster config:
	{Name:stopped-upgrade-520539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-520539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.110 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 19:43:21.665078   55629 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:43:21.667053   55629 out.go:177] * Starting "stopped-upgrade-520539" primary control-plane node in "stopped-upgrade-520539" cluster
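	
	(Editor's note: the "Last Start" section above uses the klog header format it declares at the top: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal Go sketch for splitting such lines into their components, assuming exactly that format, is shown below; the sample line is taken from the log above.)
	
	    package main
	
	    import (
	        "fmt"
	        "regexp"
	    )
	
	    // klogLine matches the declared header format:
	    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	    var klogLine = regexp.MustCompile(
	        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
	
	    func main() {
	        sample := "I0913 19:43:21.646248   55629 start.go:297] selected driver: kvm2"
	        m := klogLine.FindStringSubmatch(sample)
	        if m == nil {
	            fmt.Println("not a klog-formatted line")
	            return
	        }
	        fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
	            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	    }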
	
	
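	
	(Editor's note: the CRI-O section that follows records the RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers gRPC requests made against the runtime. As a hedged diagnostic sketch, not part of the test harness, the Go program below issues the same Version and ListContainers calls directly over the CRI-O socket, assuming the default /var/run/crio/crio.sock path and the k8s.io/cri-api v1 client.)
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"
	
	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	
	        // Dial the CRI-O socket; the path is CRI-O's default and may differ per host.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()
	
	        rt := runtimeapi.NewRuntimeServiceClient(conn)
	
	        // Same call as the RuntimeService/Version requests seen in the log.
	        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Printf("runtime %s %s\n", ver.RuntimeName, ver.RuntimeVersion)
	
	        // Same call as RuntimeService/ListContainers, with an empty filter,
	        // which returns the full container list as in the log above.
	        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	            Filter: &runtimeapi.ContainerFilter{},
	        })
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range list.Containers {
	            fmt.Printf("%s %s %v\n", c.Id, c.Metadata.Name, c.State)
	        }
	    }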
	==> CRI-O <==
	Sep 13 19:43:21 pause-933457 crio[2102]: time="2024-09-13 19:43:21.981456097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256601981433557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51934cd6-6b18-4bdf-98e2-da43d36aafe7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:21 pause-933457 crio[2102]: time="2024-09-13 19:43:21.982093422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b10c08bf-49b8-4636-a381-aad706ba3b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:21 pause-933457 crio[2102]: time="2024-09-13 19:43:21.982170259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b10c08bf-49b8-4636-a381-aad706ba3b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:21 pause-933457 crio[2102]: time="2024-09-13 19:43:21.982404825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b10c08bf-49b8-4636-a381-aad706ba3b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.029494124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70e9a5c6-427e-468e-b66e-214ff5554405 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.029650796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70e9a5c6-427e-468e-b66e-214ff5554405 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.031161351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c2cb079-43fa-4b65-a1d5-5d84437cdbfa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.031755104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256602031718930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c2cb079-43fa-4b65-a1d5-5d84437cdbfa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.032514041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83f6e5ab-2ea2-40fa-b7ca-3e7879a5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.032588623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83f6e5ab-2ea2-40fa-b7ca-3e7879a5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.033003955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83f6e5ab-2ea2-40fa-b7ca-3e7879a5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.084878116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d832dc3-16e2-48bc-b207-4ed9531d2ad1 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.084997979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d832dc3-16e2-48bc-b207-4ed9531d2ad1 name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.086810491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87dc2afd-a49d-40e3-ab32-3653b0cdfec7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.087511879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256602087477947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87dc2afd-a49d-40e3-ab32-3653b0cdfec7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.088290413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11a1fb2c-9ddc-4994-81ad-819236629068 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.088384469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11a1fb2c-9ddc-4994-81ad-819236629068 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.088897141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11a1fb2c-9ddc-4994-81ad-819236629068 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.136457349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8568fb52-3770-4f04-a842-349faa442edd name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.136578956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8568fb52-3770-4f04-a842-349faa442edd name=/runtime.v1.RuntimeService/Version
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.138363518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=073b0d6f-23eb-4575-8ce2-127863879246 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.138959364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256602138926724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=073b0d6f-23eb-4575-8ce2-127863879246 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.140019656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa1fc491-4129-42a6-9d99-3c53b7df3f23 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.140121817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa1fc491-4129-42a6-9d99-3c53b7df3f23 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 19:43:22 pause-933457 crio[2102]: time="2024-09-13 19:43:22.140445876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726256580910398162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726256580917675169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726256577052875931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726256577177052993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
37159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726256577110002636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31785125bea90a37884
5ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726256577071063880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2,PodSandboxId:91087deff47dd6578bca67df0c3d313a93d4a5dfc477749c5249f9ce5e32ba9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726256553172858969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-frbfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb8342b-c790-4425-baeb-c40e02d7fad0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05,PodSandboxId:6bd168360a06144dd5f1b390b41123dc34b15e9a5e772242fdda0ec88f0469cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726256553699380560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0dc6419-dce7-46cd-8caa-d46406a809a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966,PodSandboxId:0686db9965f3c9844ab051dfd8532c0741afd32e930592357f1767dff877da11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726256553069436771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-933457,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 40c784af0805c3420ec6b1601a925698,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a,PodSandboxId:d0bc474a4e97dea27035affae13ef39a5a770e575535b9bb6ae809cc717ca8d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726256553167181658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-933457,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: e4d49aafe3d52d9d99cb2d193950023f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27,PodSandboxId:2e80380b4598439d335bd90802158dea9ec3d8e10f6a5a9d3f8f0fa786304714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726256553064178840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-933457,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 337159310d07a596d6014ddf4913aa0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259,PodSandboxId:5eb3122c8500087753eda9c48231d39f602533fc59935b1fc1ade3c149cbc775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726256553053825888,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-933457,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 31785125bea90a378845ca568268a3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa1fc491-4129-42a6-9d99-3c53b7df3f23 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da5210c32a823       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   6bd168360a061       coredns-7c65d6cfc9-7fxbj
	0f710e3da45b1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   21 seconds ago      Running             kube-proxy                2                   91087deff47dd       kube-proxy-frbfp
	98bbf70fd6330       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   25 seconds ago      Running             kube-apiserver            2                   2e80380b45984       kube-apiserver-pause-933457
	a7d538f1baabf       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   25 seconds ago      Running             kube-scheduler            2                   5eb3122c85000       kube-scheduler-pause-933457
	b17e0cbb3322d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago      Running             etcd                      2                   0686db9965f3c       etcd-pause-933457
	63e339ba2951c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   25 seconds ago      Running             kube-controller-manager   2                   d0bc474a4e97d       kube-controller-manager-pause-933457
	ba02c00e17ad3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   48 seconds ago      Exited              coredns                   1                   6bd168360a061       coredns-7c65d6cfc9-7fxbj
	4dc94897deaaf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   49 seconds ago      Exited              kube-proxy                1                   91087deff47dd       kube-proxy-frbfp
	8676b252bf1c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   49 seconds ago      Exited              kube-controller-manager   1                   d0bc474a4e97d       kube-controller-manager-pause-933457
	61067c1a85d86       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   49 seconds ago      Exited              etcd                      1                   0686db9965f3c       etcd-pause-933457
	6edacfe2510fb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   49 seconds ago      Exited              kube-apiserver            1                   2e80380b45984       kube-apiserver-pause-933457
	c49e99000aa6d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   49 seconds ago      Exited              kube-scheduler            1                   5eb3122c85000       kube-scheduler-pause-933457
	
	
	==> coredns [ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49449 - 21658 "HINFO IN 3703302259842171280.2195062110973943195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015494249s
	
	
	==> coredns [da5210c32a8237155bd7ef5a445c688ad3bc167ea231c7dde4e58c30b7b97dcd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39816 - 20096 "HINFO IN 2119544516448739733.2420666444096889193. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025521087s
	
	
	==> describe nodes <==
	Name:               pause-933457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-933457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=pause-933457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_41_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:41:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-933457
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:43:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:43:00 +0000   Fri, 13 Sep 2024 19:41:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.59
	  Hostname:    pause-933457
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7f6b0d702d54608a683218ef1f97497
	  System UUID:                e7f6b0d7-02d5-4608-a683-218ef1f97497
	  Boot ID:                    10ce631c-aafb-45ec-87cc-19580d513661
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7fxbj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     115s
	  kube-system                 etcd-pause-933457                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-933457             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-933457    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-frbfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-pause-933457             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node pause-933457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node pause-933457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node pause-933457 status is now: NodeHasSufficientPID
	  Normal  NodeReady                119s               kubelet          Node pause-933457 status is now: NodeReady
	  Normal  RegisteredNode           116s               node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	  Normal  RegisteredNode           42s                node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-933457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-933457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-933457 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-933457 event: Registered Node pause-933457 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep13 19:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061424] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067497] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.182524] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.155828] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.317831] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.105849] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.544100] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.064854] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999663] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.096093] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.790314] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.571357] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.609915] kauditd_printk_skb: 50 callbacks suppressed
	[Sep13 19:42] systemd-fstab-generator[2027]: Ignoring "noauto" option for root device
	[  +0.206789] systemd-fstab-generator[2039]: Ignoring "noauto" option for root device
	[  +0.197656] systemd-fstab-generator[2054]: Ignoring "noauto" option for root device
	[  +0.153262] systemd-fstab-generator[2066]: Ignoring "noauto" option for root device
	[  +0.330996] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +2.847049] systemd-fstab-generator[2243]: Ignoring "noauto" option for root device
	[  +4.857546] kauditd_printk_skb: 195 callbacks suppressed
	[ +18.785624] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[Sep13 19:43] kauditd_printk_skb: 52 callbacks suppressed
	[  +9.719493] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	
	
	==> etcd [61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966] <==
	{"level":"info","ts":"2024-09-13T19:42:35.746163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:42:35.746184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 received MsgPreVoteResp from 9984c9f5bd40dbf7 at term 2"}
	{"level":"info","ts":"2024-09-13T19:42:35.746207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 received MsgVoteResp from 9984c9f5bd40dbf7 at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9984c9f5bd40dbf7 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.746261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9984c9f5bd40dbf7 elected leader 9984c9f5bd40dbf7 at term 3"}
	{"level":"info","ts":"2024-09-13T19:42:35.753256Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9984c9f5bd40dbf7","local-member-attributes":"{Name:pause-933457 ClientURLs:[https://192.168.83.59:2379]}","request-path":"/0/members/9984c9f5bd40dbf7/attributes","cluster-id":"fc03475d3706ce65","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:42:35.753314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:42:35.754036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:42:35.754753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:42:35.754777Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:42:35.755089Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:42:35.755721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:42:35.756329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:42:35.756793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.59:2379"}
	{"level":"info","ts":"2024-09-13T19:42:44.748148Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T19:42:44.748198Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-933457","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.59:2380"],"advertise-client-urls":["https://192.168.83.59:2379"]}
	{"level":"warn","ts":"2024-09-13T19:42:44.748268Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.748348Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.778179Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T19:42:44.778258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.59:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T19:42:44.779727Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9984c9f5bd40dbf7","current-leader-member-id":"9984c9f5bd40dbf7"}
	{"level":"info","ts":"2024-09-13T19:42:44.785892Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.83.59:2380"}
	{"level":"info","ts":"2024-09-13T19:42:44.786019Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.83.59:2380"}
	{"level":"info","ts":"2024-09-13T19:42:44.786050Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-933457","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.59:2380"],"advertise-client-urls":["https://192.168.83.59:2379"]}
	
	
	==> etcd [b17e0cbb3322decd9e0daed3123abc5809b47637a87e4e04f8fae4ddaac51ff4] <==
	{"level":"info","ts":"2024-09-13T19:43:05.374016Z","caller":"traceutil/trace.go:171","msg":"trace[619488644] linearizableReadLoop","detail":"{readStateIndex:566; appliedIndex:565; }","duration":"332.81954ms","start":"2024-09-13T19:43:05.041181Z","end":"2024-09-13T19:43:05.374001Z","steps":["trace[619488644] 'read index received'  (duration: 290.581543ms)","trace[619488644] 'applied index is now lower than readState.Index'  (duration: 42.236723ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:05.374193Z","caller":"traceutil/trace.go:171","msg":"trace[1455107215] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"335.320255ms","start":"2024-09-13T19:43:05.038747Z","end":"2024-09-13T19:43:05.374067Z","steps":["trace[1455107215] 'process raft request'  (duration: 293.045957ms)","trace[1455107215] 'compare'  (duration: 41.994563ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:05.374469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.038736Z","time spent":"335.692024ms","remote":"127.0.0.1:35912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":746,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e53606166eca\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e53606166eca\" value_size:655 lease:6626925823406077536 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:43:05.374210Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.010048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-09-13T19:43:05.374751Z","caller":"traceutil/trace.go:171","msg":"trace[157284599] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:522; }","duration":"333.562743ms","start":"2024-09-13T19:43:05.041179Z","end":"2024-09-13T19:43:05.374741Z","steps":["trace[157284599] 'agreement among raft nodes before linearized reading'  (duration: 332.940969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.374797Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.041155Z","time spent":"333.631803ms","remote":"127.0.0.1:36020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.374245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.101208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2024-09-13T19:43:05.374906Z","caller":"traceutil/trace.go:171","msg":"trace[1136643454] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj; range_end:; response_count:1; response_revision:522; }","duration":"300.756824ms","start":"2024-09-13T19:43:05.074141Z","end":"2024-09-13T19:43:05.374898Z","steps":["trace[1136643454] 'agreement among raft nodes before linearized reading'  (duration: 300.088126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.374936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.074107Z","time spent":"300.818806ms","remote":"127.0.0.1:36006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5171,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.855018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.749029ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15850297860260853454 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" value_size:655 lease:6626925823406077536 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:43:05.855166Z","caller":"traceutil/trace.go:171","msg":"trace[566151769] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:567; }","duration":"400.310838ms","start":"2024-09-13T19:43:05.454845Z","end":"2024-09-13T19:43:05.855156Z","steps":["trace[566151769] 'read index received'  (duration: 72.300932ms)","trace[566151769] 'applied index is now lower than readState.Index'  (duration: 328.008157ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:05.855197Z","caller":"traceutil/trace.go:171","msg":"trace[2137692898] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"400.422336ms","start":"2024-09-13T19:43:05.454752Z","end":"2024-09-13T19:43:05.855175Z","steps":["trace[2137692898] 'process raft request'  (duration: 72.4595ms)","trace[2137692898] 'compare'  (duration: 327.657835ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:05.855301Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.454736Z","time spent":"400.52666ms","remote":"127.0.0.1:35912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":746,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-933457.17f4e5360b471db7\" value_size:655 lease:6626925823406077536 >> failure:<>"}
	{"level":"warn","ts":"2024-09-13T19:43:05.855389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.534103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-09-13T19:43:05.855432Z","caller":"traceutil/trace.go:171","msg":"trace[380587743] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:524; }","duration":"400.583253ms","start":"2024-09-13T19:43:05.454842Z","end":"2024-09-13T19:43:05.855425Z","steps":["trace[380587743] 'agreement among raft nodes before linearized reading'  (duration: 400.447211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:05.855471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:43:05.454813Z","time spent":"400.652455ms","remote":"127.0.0.1:36020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2024-09-13T19:43:05.855726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.254708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj\" ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2024-09-13T19:43:05.855774Z","caller":"traceutil/trace.go:171","msg":"trace[1417660270] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-7fxbj; range_end:; response_count:1; response_revision:524; }","duration":"281.307429ms","start":"2024-09-13T19:43:05.574458Z","end":"2024-09-13T19:43:05.855766Z","steps":["trace[1417660270] 'agreement among raft nodes before linearized reading'  (duration: 281.218478ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:06.148990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.621021ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15850297860260853457 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e5360b84af93\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-933457.17f4e5360b84af93\" value_size:655 lease:6626925823406077536 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:43:06.149168Z","caller":"traceutil/trace.go:171","msg":"trace[1682055123] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"289.047872ms","start":"2024-09-13T19:43:05.860109Z","end":"2024-09-13T19:43:06.149157Z","steps":["trace[1682055123] 'read index received'  (duration: 124.205197ms)","trace[1682055123] 'applied index is now lower than readState.Index'  (duration: 164.841648ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:43:06.149242Z","caller":"traceutil/trace.go:171","msg":"trace[1744278239] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"289.470903ms","start":"2024-09-13T19:43:05.859742Z","end":"2024-09-13T19:43:06.149213Z","steps":["trace[1744278239] 'process raft request'  (duration: 124.584882ms)","trace[1744278239] 'compare'  (duration: 164.491169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:43:06.149366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.247756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-09-13T19:43:06.149410Z","caller":"traceutil/trace.go:171","msg":"trace[1411431965] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pv-protection-controller; range_end:; response_count:1; response_revision:525; }","duration":"289.297377ms","start":"2024-09-13T19:43:05.860106Z","end":"2024-09-13T19:43:06.149403Z","steps":["trace[1411431965] 'agreement among raft nodes before linearized reading'  (duration: 289.13477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:43:06.149678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.700055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-933457\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-09-13T19:43:06.150513Z","caller":"traceutil/trace.go:171","msg":"trace[936262000] range","detail":"{range_begin:/registry/minions/pause-933457; range_end:; response_count:1; response_revision:525; }","duration":"289.532214ms","start":"2024-09-13T19:43:05.860969Z","end":"2024-09-13T19:43:06.150501Z","steps":["trace[936262000] 'agreement among raft nodes before linearized reading'  (duration: 288.6185ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:43:22 up 2 min,  0 users,  load average: 0.54, 0.23, 0.09
	Linux pause-933457 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27] <==
	W0913 19:42:54.063386       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.106213       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.113763       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.153970       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.166588       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.166912       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.174800       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.215299       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.234749       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.269069       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.280757       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.296888       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.319018       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.322456       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.377049       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.401581       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.406019       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.434750       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.542917       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.590690       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.609720       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.633464       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.677275       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.788908       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 19:42:54.888368       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [98bbf70fd63304f22b2ba09c27befc8cc2eabb158a32942d7b6368164a56dded] <==
	I0913 19:43:00.553959       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0913 19:43:00.554150       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0913 19:43:00.554190       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0913 19:43:00.556296       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0913 19:43:00.556991       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0913 19:43:00.569271       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 19:43:00.569468       1 shared_informer.go:320] Caches are synced for configmaps
	I0913 19:43:00.569542       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 19:43:00.569688       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 19:43:00.570331       1 aggregator.go:171] initial CRD sync complete...
	I0913 19:43:00.570371       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 19:43:00.570378       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 19:43:00.570383       1 cache.go:39] Caches are synced for autoregister controller
	E0913 19:43:00.577827       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0913 19:43:00.608687       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 19:43:00.608709       1 policy_source.go:224] refreshing policies
	I0913 19:43:00.615109       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:43:01.355228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:43:01.899477       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 19:43:01.913262       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 19:43:01.956072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 19:43:01.985992       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:43:01.992742       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:43:06.450847       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 19:43:06.453016       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [63e339ba2951c30bd8da7a88639d237b0309efaf7a94cd250a9a4e92825e327d] <==
	I0913 19:43:06.330221       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:43:06.339226       1 shared_informer.go:320] Caches are synced for PVC protection
	I0913 19:43:06.348461       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:43:06.354030       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:43:06.362224       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:43:06.367999       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0913 19:43:06.370731       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:43:06.372810       1 shared_informer.go:320] Caches are synced for endpoint
	I0913 19:43:06.373423       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0913 19:43:06.374670       1 shared_informer.go:320] Caches are synced for job
	I0913 19:43:06.384251       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0913 19:43:06.384384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.912µs"
	I0913 19:43:06.399056       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0913 19:43:06.402382       1 shared_informer.go:320] Caches are synced for HPA
	I0913 19:43:06.405928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0913 19:43:06.408090       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:43:06.413723       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0913 19:43:06.419071       1 shared_informer.go:320] Caches are synced for disruption
	I0913 19:43:06.458156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.276886ms"
	I0913 19:43:06.458731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.97µs"
	I0913 19:43:06.464024       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:43:06.488156       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:43:06.866755       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:43:06.866793       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 19:43:06.905198       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a] <==
	I0913 19:42:40.480724       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0913 19:42:40.482307       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0913 19:42:40.505237       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 19:42:40.520784       1 shared_informer.go:320] Caches are synced for taint
	I0913 19:42:40.521119       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0913 19:42:40.521364       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-933457"
	I0913 19:42:40.521489       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 19:42:40.530337       1 shared_informer.go:320] Caches are synced for GC
	I0913 19:42:40.536857       1 shared_informer.go:320] Caches are synced for node
	I0913 19:42:40.536930       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0913 19:42:40.536953       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0913 19:42:40.536957       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0913 19:42:40.536962       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0913 19:42:40.537082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-933457"
	I0913 19:42:40.543990       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 19:42:40.556157       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:42:40.562537       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 19:42:40.629111       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 19:42:40.632702       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 19:42:40.680128       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 19:42:40.694227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="265.056776ms"
	I0913 19:42:40.694786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="128.767µs"
	I0913 19:42:41.096581       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:42:41.129072       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 19:42:41.129117       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0f710e3da45b15fe72958441699682fc1298243fb5bc7df2767f49de0a7ad687] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:43:01.144136       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:43:01.155453       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.83.59"]
	E0913 19:43:01.155774       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:43:01.201128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:43:01.201174       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:43:01.201197       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:43:01.204496       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:43:01.204882       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:43:01.204908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:43:01.206179       1 config.go:199] "Starting service config controller"
	I0913 19:43:01.206221       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:43:01.206248       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:43:01.206252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:43:01.206819       1 config.go:328] "Starting node config controller"
	I0913 19:43:01.206857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:43:01.306724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:43:01.306749       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:43:01.306891       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:42:34.906023       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:42:37.197942       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.83.59"]
	E0913 19:42:37.242711       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:42:37.341340       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:42:37.341505       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:42:37.341598       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:42:37.349759       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:42:37.355412       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:42:37.355843       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:42:37.358157       1 config.go:199] "Starting service config controller"
	I0913 19:42:37.358262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:42:37.358339       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:42:37.358366       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:42:37.363469       1 config.go:328] "Starting node config controller"
	I0913 19:42:37.363599       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:42:37.459441       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:42:37.459584       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:42:37.464756       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7d538f1baabf0dcc15a5450f23023f75524ca8b22bba77cf8c62b56dba2a2d4] <==
	I0913 19:42:58.685760       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:43:00.439273       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:43:00.439451       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:43:00.439485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:43:00.439556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:43:00.505572       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:43:00.505676       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:43:00.514348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:43:00.514396       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:43:00.515079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:43:00.515142       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:43:00.615709       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259] <==
	I0913 19:42:34.703299       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:42:37.069886       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:42:37.069930       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:42:37.069940       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:42:37.069952       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:42:37.189547       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:42:37.189597       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:42:37.207335       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:42:37.207563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:42:37.207673       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:42:37.207712       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:42:37.308569       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:42:54.958219       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0913 19:42:54.958280       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0913 19:42:54.958527       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0913 19:42:54.958535       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.033926    3098 scope.go:117] "RemoveContainer" containerID="8676b252bf1c28d3cb6742f7bfbd7dd807365a67f2e32f36e9217c0da7d7f23a"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.046871    3098 scope.go:117] "RemoveContainer" containerID="61067c1a85d8673f926fb0b1821d2c36123e06963b2c321d3346bf51a52cb966"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.056272    3098 scope.go:117] "RemoveContainer" containerID="6edacfe2510fbe9a4608060fd1be51aca7e450fe78177a0cf92a4539040d7b27"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.061905    3098 scope.go:117] "RemoveContainer" containerID="c49e99000aa6dd7ed6586c9e93f0ac4a9f934540a949c269b2dc9dcdac0e6259"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.224988    3098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-933457?timeout=10s\": dial tcp 192.168.83.59:8443: connect: connection refused" interval="800ms"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: I0913 19:42:57.433455    3098 kubelet_node_status.go:72] "Attempting to register node" node="pause-933457"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.434545    3098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.59:8443: connect: connection refused" node="pause-933457"
	Sep 13 19:42:57 pause-933457 kubelet[3098]: W0913 19:42:57.553840    3098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.83.59:8443: connect: connection refused
	Sep 13 19:42:57 pause-933457 kubelet[3098]: E0913 19:42:57.553981    3098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.83.59:8443: connect: connection refused" logger="UnhandledError"
	Sep 13 19:42:58 pause-933457 kubelet[3098]: I0913 19:42:58.237108    3098 kubelet_node_status.go:72] "Attempting to register node" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.582857    3098 apiserver.go:52] "Watching apiserver"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.611741    3098 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.662888    3098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb8342b-c790-4425-baeb-c40e02d7fad0-xtables-lock\") pod \"kube-proxy-frbfp\" (UID: \"cfb8342b-c790-4425-baeb-c40e02d7fad0\") " pod="kube-system/kube-proxy-frbfp"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.662986    3098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb8342b-c790-4425-baeb-c40e02d7fad0-lib-modules\") pod \"kube-proxy-frbfp\" (UID: \"cfb8342b-c790-4425-baeb-c40e02d7fad0\") " pod="kube-system/kube-proxy-frbfp"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.702513    3098 kubelet_node_status.go:111] "Node was previously registered" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.702909    3098 kubelet_node_status.go:75] "Successfully registered node" node="pause-933457"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.703039    3098 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.704433    3098 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.888959    3098 scope.go:117] "RemoveContainer" containerID="4dc94897deaaf4e69873a1b170e8112acf0c6df733065ad175f5e8ae501562f2"
	Sep 13 19:43:00 pause-933457 kubelet[3098]: I0913 19:43:00.889031    3098 scope.go:117] "RemoveContainer" containerID="ba02c00e17ad3940584d065430964f8e9776e00d33734f058867687120958d05"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: I0913 19:43:06.421402    3098 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: E0913 19:43:06.726722    3098 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256586726270300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:06 pause-933457 kubelet[3098]: E0913 19:43:06.726791    3098 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256586726270300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:16 pause-933457 kubelet[3098]: E0913 19:43:16.728324    3098 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256596728049078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 19:43:16 pause-933457 kubelet[3098]: E0913 19:43:16.728368    3098 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726256596728049078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-933457 -n pause-933457
helpers_test.go:261: (dbg) Run:  kubectl --context pause-933457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (107.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m31.945511184s)

                                                
                                                
-- stdout --
	* [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:47:48.481381   65574 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:47:48.481504   65574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:47:48.481511   65574 out.go:358] Setting ErrFile to fd 2...
	I0913 19:47:48.481517   65574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:47:48.481808   65574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:47:48.482614   65574 out.go:352] Setting JSON to false
	I0913 19:47:48.484083   65574 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5411,"bootTime":1726251457,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:47:48.484221   65574 start.go:139] virtualization: kvm guest
	I0913 19:47:48.486614   65574 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:47:48.488025   65574 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:47:48.488084   65574 notify.go:220] Checking for updates...
	I0913 19:47:48.490802   65574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:47:48.492267   65574 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:47:48.493667   65574 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:47:48.495083   65574 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:47:48.496574   65574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:47:48.498699   65574 config.go:182] Loaded profile config "bridge-604714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:47:48.498833   65574 config.go:182] Loaded profile config "flannel-604714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:47:48.498952   65574 config.go:182] Loaded profile config "kubernetes-upgrade-421098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:47:48.499075   65574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:47:48.546661   65574 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 19:47:48.548028   65574 start.go:297] selected driver: kvm2
	I0913 19:47:48.548047   65574 start.go:901] validating driver "kvm2" against <nil>
	I0913 19:47:48.548076   65574 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:47:48.548863   65574 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:47:48.548932   65574 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:47:48.572521   65574 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:47:48.572583   65574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 19:47:48.572829   65574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:47:48.572865   65574 cni.go:84] Creating CNI manager for ""
	I0913 19:47:48.572924   65574 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:47:48.572935   65574 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 19:47:48.572993   65574 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:47:48.573117   65574 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:47:48.574830   65574 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:47:48.575883   65574 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:47:48.575920   65574 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:47:48.575932   65574 cache.go:56] Caching tarball of preloaded images
	I0913 19:47:48.576021   65574 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:47:48.576036   65574 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:47:48.576122   65574 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:47:48.576139   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json: {Name:mkf4a5bf136d863132ded36faecc719ed0bfe8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:47:48.576288   65574 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:47:48.576327   65574 start.go:364] duration metric: took 21.999µs to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:47:48.576345   65574 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:47:48.576419   65574 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 19:47:48.577964   65574 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 19:47:48.578123   65574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:47:48.578169   65574 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:47:48.600332   65574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0913 19:47:48.600902   65574 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:47:48.601496   65574 main.go:141] libmachine: Using API Version  1
	I0913 19:47:48.601518   65574 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:47:48.601882   65574 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:47:48.602071   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:47:48.602297   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:47:48.602469   65574 start.go:159] libmachine.API.Create for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:47:48.602501   65574 client.go:168] LocalClient.Create starting
	I0913 19:47:48.602537   65574 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 19:47:48.602581   65574 main.go:141] libmachine: Decoding PEM data...
	I0913 19:47:48.602606   65574 main.go:141] libmachine: Parsing certificate...
	I0913 19:47:48.602669   65574 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 19:47:48.602697   65574 main.go:141] libmachine: Decoding PEM data...
	I0913 19:47:48.602712   65574 main.go:141] libmachine: Parsing certificate...
	I0913 19:47:48.602736   65574 main.go:141] libmachine: Running pre-create checks...
	I0913 19:47:48.602755   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .PreCreateCheck
	I0913 19:47:48.603287   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:47:48.603780   65574 main.go:141] libmachine: Creating machine...
	I0913 19:47:48.603800   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .Create
	I0913 19:47:48.604463   65574 main.go:141] libmachine: (old-k8s-version-234290) Creating KVM machine...
	I0913 19:47:48.605528   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found existing default KVM network
	I0913 19:47:48.607126   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:48.606936   65595 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:af:a6} reservation:<nil>}
	I0913 19:47:48.608427   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:48.608332   65595 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1e:05:f3} reservation:<nil>}
	I0913 19:47:48.611754   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:48.609656   65595 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:d5:8c} reservation:<nil>}
	I0913 19:47:48.611789   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:48.611296   65595 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003a0e90}
	I0913 19:47:48.611807   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | created network xml: 
	I0913 19:47:48.611815   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | <network>
	I0913 19:47:48.611825   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   <name>mk-old-k8s-version-234290</name>
	I0913 19:47:48.611832   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   <dns enable='no'/>
	I0913 19:47:48.611843   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   
	I0913 19:47:48.611851   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0913 19:47:48.611861   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |     <dhcp>
	I0913 19:47:48.611867   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0913 19:47:48.611872   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |     </dhcp>
	I0913 19:47:48.611884   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   </ip>
	I0913 19:47:48.611889   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG |   
	I0913 19:47:48.611893   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | </network>
	I0913 19:47:48.611901   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | 
	I0913 19:47:48.618219   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | trying to create private KVM network mk-old-k8s-version-234290 192.168.72.0/24...
	I0913 19:47:48.722169   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | private KVM network mk-old-k8s-version-234290 192.168.72.0/24 created
	I0913 19:47:48.722206   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290 ...
	I0913 19:47:48.722220   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:48.722068   65595 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:47:48.722241   65574 main.go:141] libmachine: (old-k8s-version-234290) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 19:47:48.722299   65574 main.go:141] libmachine: (old-k8s-version-234290) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 19:47:49.012344   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:49.012191   65595 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa...
	I0913 19:47:49.294722   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:49.294563   65595 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/old-k8s-version-234290.rawdisk...
	I0913 19:47:49.294767   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Writing magic tar header
	I0913 19:47:49.294789   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Writing SSH key tar header
	I0913 19:47:49.294805   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:49.294718   65595 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290 ...
	I0913 19:47:49.294891   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290
	I0913 19:47:49.294915   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 19:47:49.294944   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:47:49.294959   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 19:47:49.294971   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 19:47:49.294993   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home/jenkins
	I0913 19:47:49.295003   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Checking permissions on dir: /home
	I0913 19:47:49.295012   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Skipping /home - not owner
	I0913 19:47:49.295038   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290 (perms=drwx------)
	I0913 19:47:49.295049   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 19:47:49.295061   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 19:47:49.295071   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 19:47:49.295083   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 19:47:49.295091   65574 main.go:141] libmachine: (old-k8s-version-234290) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 19:47:49.295102   65574 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:47:49.296690   65574 main.go:141] libmachine: (old-k8s-version-234290) define libvirt domain using xml: 
	I0913 19:47:49.296706   65574 main.go:141] libmachine: (old-k8s-version-234290) <domain type='kvm'>
	I0913 19:47:49.296715   65574 main.go:141] libmachine: (old-k8s-version-234290)   <name>old-k8s-version-234290</name>
	I0913 19:47:49.296722   65574 main.go:141] libmachine: (old-k8s-version-234290)   <memory unit='MiB'>2200</memory>
	I0913 19:47:49.296730   65574 main.go:141] libmachine: (old-k8s-version-234290)   <vcpu>2</vcpu>
	I0913 19:47:49.296736   65574 main.go:141] libmachine: (old-k8s-version-234290)   <features>
	I0913 19:47:49.296746   65574 main.go:141] libmachine: (old-k8s-version-234290)     <acpi/>
	I0913 19:47:49.296751   65574 main.go:141] libmachine: (old-k8s-version-234290)     <apic/>
	I0913 19:47:49.296759   65574 main.go:141] libmachine: (old-k8s-version-234290)     <pae/>
	I0913 19:47:49.296765   65574 main.go:141] libmachine: (old-k8s-version-234290)     
	I0913 19:47:49.296786   65574 main.go:141] libmachine: (old-k8s-version-234290)   </features>
	I0913 19:47:49.296793   65574 main.go:141] libmachine: (old-k8s-version-234290)   <cpu mode='host-passthrough'>
	I0913 19:47:49.296800   65574 main.go:141] libmachine: (old-k8s-version-234290)   
	I0913 19:47:49.296806   65574 main.go:141] libmachine: (old-k8s-version-234290)   </cpu>
	I0913 19:47:49.296813   65574 main.go:141] libmachine: (old-k8s-version-234290)   <os>
	I0913 19:47:49.296819   65574 main.go:141] libmachine: (old-k8s-version-234290)     <type>hvm</type>
	I0913 19:47:49.296829   65574 main.go:141] libmachine: (old-k8s-version-234290)     <boot dev='cdrom'/>
	I0913 19:47:49.296835   65574 main.go:141] libmachine: (old-k8s-version-234290)     <boot dev='hd'/>
	I0913 19:47:49.296848   65574 main.go:141] libmachine: (old-k8s-version-234290)     <bootmenu enable='no'/>
	I0913 19:47:49.296854   65574 main.go:141] libmachine: (old-k8s-version-234290)   </os>
	I0913 19:47:49.296874   65574 main.go:141] libmachine: (old-k8s-version-234290)   <devices>
	I0913 19:47:49.296880   65574 main.go:141] libmachine: (old-k8s-version-234290)     <disk type='file' device='cdrom'>
	I0913 19:47:49.296893   65574 main.go:141] libmachine: (old-k8s-version-234290)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/boot2docker.iso'/>
	I0913 19:47:49.296900   65574 main.go:141] libmachine: (old-k8s-version-234290)       <target dev='hdc' bus='scsi'/>
	I0913 19:47:49.296906   65574 main.go:141] libmachine: (old-k8s-version-234290)       <readonly/>
	I0913 19:47:49.296913   65574 main.go:141] libmachine: (old-k8s-version-234290)     </disk>
	I0913 19:47:49.296921   65574 main.go:141] libmachine: (old-k8s-version-234290)     <disk type='file' device='disk'>
	I0913 19:47:49.296929   65574 main.go:141] libmachine: (old-k8s-version-234290)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 19:47:49.296942   65574 main.go:141] libmachine: (old-k8s-version-234290)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/old-k8s-version-234290.rawdisk'/>
	I0913 19:47:49.296949   65574 main.go:141] libmachine: (old-k8s-version-234290)       <target dev='hda' bus='virtio'/>
	I0913 19:47:49.296957   65574 main.go:141] libmachine: (old-k8s-version-234290)     </disk>
	I0913 19:47:49.296963   65574 main.go:141] libmachine: (old-k8s-version-234290)     <interface type='network'>
	I0913 19:47:49.296977   65574 main.go:141] libmachine: (old-k8s-version-234290)       <source network='mk-old-k8s-version-234290'/>
	I0913 19:47:49.296993   65574 main.go:141] libmachine: (old-k8s-version-234290)       <model type='virtio'/>
	I0913 19:47:49.297001   65574 main.go:141] libmachine: (old-k8s-version-234290)     </interface>
	I0913 19:47:49.297013   65574 main.go:141] libmachine: (old-k8s-version-234290)     <interface type='network'>
	I0913 19:47:49.297050   65574 main.go:141] libmachine: (old-k8s-version-234290)       <source network='default'/>
	I0913 19:47:49.297086   65574 main.go:141] libmachine: (old-k8s-version-234290)       <model type='virtio'/>
	I0913 19:47:49.297100   65574 main.go:141] libmachine: (old-k8s-version-234290)     </interface>
	I0913 19:47:49.297109   65574 main.go:141] libmachine: (old-k8s-version-234290)     <serial type='pty'>
	I0913 19:47:49.297142   65574 main.go:141] libmachine: (old-k8s-version-234290)       <target port='0'/>
	I0913 19:47:49.297166   65574 main.go:141] libmachine: (old-k8s-version-234290)     </serial>
	I0913 19:47:49.297182   65574 main.go:141] libmachine: (old-k8s-version-234290)     <console type='pty'>
	I0913 19:47:49.297193   65574 main.go:141] libmachine: (old-k8s-version-234290)       <target type='serial' port='0'/>
	I0913 19:47:49.297202   65574 main.go:141] libmachine: (old-k8s-version-234290)     </console>
	I0913 19:47:49.297264   65574 main.go:141] libmachine: (old-k8s-version-234290)     <rng model='virtio'>
	I0913 19:47:49.297291   65574 main.go:141] libmachine: (old-k8s-version-234290)       <backend model='random'>/dev/random</backend>
	I0913 19:47:49.297302   65574 main.go:141] libmachine: (old-k8s-version-234290)     </rng>
	I0913 19:47:49.297309   65574 main.go:141] libmachine: (old-k8s-version-234290)     
	I0913 19:47:49.297317   65574 main.go:141] libmachine: (old-k8s-version-234290)     
	I0913 19:47:49.297366   65574 main.go:141] libmachine: (old-k8s-version-234290)   </devices>
	I0913 19:47:49.297392   65574 main.go:141] libmachine: (old-k8s-version-234290) </domain>
	I0913 19:47:49.297404   65574 main.go:141] libmachine: (old-k8s-version-234290) 
	I0913 19:47:49.303198   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:5a:a1:37 in network default
	I0913 19:47:49.303940   65574 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:47:49.303966   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:49.304846   65574 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:47:49.305235   65574 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:47:49.309868   65574 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:47:49.310668   65574 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:47:50.731282   65574 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:47:50.732327   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:50.732805   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:50.732851   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:50.732767   65595 retry.go:31] will retry after 308.902108ms: waiting for machine to come up
	I0913 19:47:51.043526   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:51.044165   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:51.044195   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:51.044111   65595 retry.go:31] will retry after 250.733397ms: waiting for machine to come up
	I0913 19:47:51.296399   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:51.296968   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:51.296992   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:51.296925   65595 retry.go:31] will retry after 487.272671ms: waiting for machine to come up
	I0913 19:47:51.785532   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:51.785988   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:51.786009   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:51.785941   65595 retry.go:31] will retry after 401.924965ms: waiting for machine to come up
	I0913 19:47:52.189523   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:52.190082   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:52.190126   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:52.190046   65595 retry.go:31] will retry after 676.963502ms: waiting for machine to come up
	I0913 19:47:52.869230   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:52.869813   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:52.869857   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:52.869760   65595 retry.go:31] will retry after 621.092849ms: waiting for machine to come up
	I0913 19:47:53.492215   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:53.492867   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:53.492896   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:53.492802   65595 retry.go:31] will retry after 933.73638ms: waiting for machine to come up
	I0913 19:47:54.428496   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:54.429053   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:54.429080   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:54.429005   65595 retry.go:31] will retry after 959.616348ms: waiting for machine to come up
	I0913 19:47:55.389752   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:55.390206   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:55.390236   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:55.390170   65595 retry.go:31] will retry after 1.712660625s: waiting for machine to come up
	I0913 19:47:57.104565   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:57.105237   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:57.105259   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:57.105167   65595 retry.go:31] will retry after 2.26565962s: waiting for machine to come up
	I0913 19:47:59.372554   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:47:59.373189   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:47:59.373213   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:47:59.373134   65595 retry.go:31] will retry after 2.006445771s: waiting for machine to come up
	I0913 19:48:01.382707   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:01.383346   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:48:01.383379   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:48:01.383296   65595 retry.go:31] will retry after 3.434963432s: waiting for machine to come up
	I0913 19:48:04.820171   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:04.820631   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:48:04.820653   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:48:04.820587   65595 retry.go:31] will retry after 3.253008601s: waiting for machine to come up
	I0913 19:48:08.075187   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:08.075798   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:48:08.075819   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:48:08.075746   65595 retry.go:31] will retry after 4.482850859s: waiting for machine to come up
	I0913 19:48:12.560223   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.560609   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.560633   65574 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
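The lease-polling loop above retries with growing, jittered delays until the libvirt domain reports an IP. Below is a minimal Go sketch of that retry-with-backoff pattern; the lookupIP probe and the two-minute budget are illustrative assumptions, not minikube's actual retry.go API.

// retrysketch.go - a minimal retry-with-backoff sketch mirroring the
// "will retry after ..." lines above. The lookupIP probe is hypothetical.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retry keeps calling probe until it succeeds or maxWait elapses,
// sleeping a randomized, growing delay between attempts.
func retry(probe func() error, maxWait time.Duration) error {
    deadline := time.Now().Add(maxWait)
    delay := 250 * time.Millisecond
    for attempt := 1; ; attempt++ {
        err := probe()
        if err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
        }
        // Jitter the delay (roughly 0.5x-1.5x) and grow it, like the intervals in the log.
        sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
        fmt.Printf("will retry after %v: %v\n", sleep, err)
        time.Sleep(sleep)
        delay *= 2
    }
}

func main() {
    tries := 0
    // Hypothetical probe standing in for "ask libvirt for the domain's DHCP lease".
    lookupIP := func() error {
        tries++
        if tries < 4 {
            return errors.New("unable to find current IP address of domain")
        }
        return nil
    }
    if err := retry(lookupIP, 2*time.Minute); err != nil {
        fmt.Println("machine never came up:", err)
        return
    }
    fmt.Println("found IP for machine")
}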
	I0913 19:48:12.560677   65574 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:48:12.560885   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290
	I0913 19:48:12.637071   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:48:12.637104   65574 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:48:12.637194   65574 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:48:12.639511   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.639849   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:33:43}
	I0913 19:48:12.639895   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.639922   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:48:12.639963   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:48:12.640018   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:48:12.640036   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:48:12.640048   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:48:12.762518   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:48:12.762809   65574 main.go:141] libmachine: (old-k8s-version-234290) KVM machine creation complete!
	I0913 19:48:12.763179   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:48:12.763784   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:12.763972   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:12.764128   65574 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 19:48:12.764144   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:48:12.765356   65574 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 19:48:12.765371   65574 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 19:48:12.765376   65574 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 19:48:12.765382   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:12.767447   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.767762   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:12.767795   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.767873   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:12.768044   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.768199   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.768357   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:12.768541   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:12.768715   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:12.768725   65574 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 19:48:12.869352   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
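The `exit 0` probe above is how SSH readiness is detected: if the no-op command runs cleanly over SSH, the daemon is up. A minimal sketch of the same check using golang.org/x/crypto/ssh (an assumed dependency for the sketch), with the address, user, and key path as placeholders taken from the log.

// sshprobe.go - dial the guest and run `exit 0`; a nil error means sshd is ready.
package main

import (
    "fmt"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        Timeout:         10 * time.Second,
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return err
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        return err
    }
    defer sess.Close()
    return sess.Run("exit 0") // the same no-op command used in the log
}

func main() {
    // Placeholder values; the real run used 192.168.72.137 and the machine's id_rsa.
    if err := sshReady("192.168.72.137:22", "docker", "/path/to/id_rsa"); err != nil {
        fmt.Println("SSH not ready yet:", err)
        return
    }
    fmt.Println("SSH is available")
}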
	I0913 19:48:12.869377   65574 main.go:141] libmachine: Detecting the provisioner...
	I0913 19:48:12.869389   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:12.872627   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.873019   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:12.873046   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.873228   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:12.873422   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.873595   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.873733   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:12.873879   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:12.874034   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:12.874044   65574 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 19:48:12.974852   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 19:48:12.974921   65574 main.go:141] libmachine: found compatible host: buildroot
	I0913 19:48:12.974931   65574 main.go:141] libmachine: Provisioning with buildroot...
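Provisioner detection above boils down to running `cat /etc/os-release` on the guest and keying off the ID field ("buildroot" here). A small sketch of that parse, with the command output inlined for illustration.

// osrelease.go - pick the provisioner from the ID field of /etc/os-release output.
package main

import (
    "bufio"
    "fmt"
    "strings"
)

func detectProvisioner(osRelease string) string {
    sc := bufio.NewScanner(strings.NewReader(osRelease))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        if strings.HasPrefix(line, "ID=") {
            return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
        }
    }
    return ""
}

func main() {
    out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    fmt.Println("found compatible host:", detectProvisioner(out)) // buildroot
}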
	I0913 19:48:12.974938   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:48:12.975186   65574 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:48:12.975223   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:48:12.975387   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:12.978012   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.978404   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:12.978439   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:12.978542   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:12.978702   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.978845   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:12.978995   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:12.979176   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:12.979351   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:12.979366   65574 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:48:13.093001   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:48:13.093040   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.095792   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.096196   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.096243   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.096358   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.096508   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.096624   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.096835   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.097000   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:13.097167   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:13.097182   65574 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:48:13.210829   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
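The hostname step above runs two shell snippets over SSH: write the new name to /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it. A sketch of assembling those snippets in Go; the helper name is illustrative, not minikube's.

// hostnamecmd.go - build the hostname-provisioning commands shown in the log.
package main

import "fmt"

func hostnameCommands(name string) []string {
    return []string{
        fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
        fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
    }
}

func main() {
    for _, cmd := range hostnameCommands("old-k8s-version-234290") {
        fmt.Println(cmd)
    }
}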
	I0913 19:48:13.210865   65574 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:48:13.210906   65574 buildroot.go:174] setting up certificates
	I0913 19:48:13.210916   65574 provision.go:84] configureAuth start
	I0913 19:48:13.210932   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:48:13.211229   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:48:13.214366   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.214763   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.214787   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.214963   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.217483   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.217826   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.217864   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.218088   65574 provision.go:143] copyHostCerts
	I0913 19:48:13.218220   65574 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:48:13.218239   65574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:48:13.218308   65574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:48:13.218421   65574 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:48:13.218432   65574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:48:13.218457   65574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:48:13.218510   65574 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:48:13.218516   65574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:48:13.218534   65574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:48:13.218586   65574 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:48:13.302047   65574 provision.go:177] copyRemoteCerts
	I0913 19:48:13.302126   65574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:48:13.302159   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.305035   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.305573   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.305597   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.305633   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.305815   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.306028   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.306192   65574 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:48:13.389283   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:48:13.418823   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:48:13.450271   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:48:13.478315   65574 provision.go:87] duration metric: took 267.382223ms to configureAuth
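configureAuth above copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the machine name. A sketch of building such a certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs with the minikube CA key, and the 26280h lifetime simply mirrors the CertExpiration value in the profile config.

// servercert.go - an x509 template whose SANs match the san=[...] list in the log.
package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "fmt"
    "math/big"
    "net"
    "time"
)

func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-234290"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // SANs copied from the log: 127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.137")},
        DNSNames:    []string{"localhost", "minikube", "old-k8s-version-234290"},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}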
	I0913 19:48:13.478348   65574 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:48:13.478541   65574 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:48:13.478638   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.481548   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.481964   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.481998   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.482188   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.482358   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.482503   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.482664   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.482838   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:13.482990   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:13.483004   65574 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:48:13.714647   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:48:13.714671   65574 main.go:141] libmachine: Checking connection to Docker...
	I0913 19:48:13.714681   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetURL
	I0913 19:48:13.716072   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using libvirt version 6000000
	I0913 19:48:13.718561   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.719088   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.719124   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.719309   65574 main.go:141] libmachine: Docker is up and running!
	I0913 19:48:13.719326   65574 main.go:141] libmachine: Reticulating splines...
	I0913 19:48:13.719334   65574 client.go:171] duration metric: took 25.116822859s to LocalClient.Create
	I0913 19:48:13.719359   65574 start.go:167] duration metric: took 25.116890373s to libmachine.API.Create "old-k8s-version-234290"
	I0913 19:48:13.719368   65574 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:48:13.719384   65574 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:48:13.719407   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:13.719682   65574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:48:13.719712   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.721803   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.722167   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.722190   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.722346   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.722537   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.722691   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.722849   65574 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:48:13.810509   65574 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:48:13.815519   65574 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:48:13.815545   65574 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:48:13.815611   65574 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:48:13.815683   65574 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:48:13.815768   65574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:48:13.826455   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:48:13.850764   65574 start.go:296] duration metric: took 131.379337ms for postStartSetup
	I0913 19:48:13.850817   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:48:13.851480   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:48:13.854082   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.854529   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.854560   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.854805   65574 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:48:13.854980   65574 start.go:128] duration metric: took 25.278552682s to createHost
	I0913 19:48:13.855001   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.857767   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.858179   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.858204   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.858327   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.858521   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.858670   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.858812   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.858985   65574 main.go:141] libmachine: Using SSH client type: native
	I0913 19:48:13.859195   65574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:48:13.859211   65574 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:48:13.963303   65574 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726256893.918221507
	
	I0913 19:48:13.963340   65574 fix.go:216] guest clock: 1726256893.918221507
	I0913 19:48:13.963353   65574 fix.go:229] Guest: 2024-09-13 19:48:13.918221507 +0000 UTC Remote: 2024-09-13 19:48:13.854989824 +0000 UTC m=+25.417801234 (delta=63.231683ms)
	I0913 19:48:13.963383   65574 fix.go:200] guest clock delta is within tolerance: 63.231683ms
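The clock check above reads `date +%s.%N` on the guest and compares it with the host clock; this run stays within tolerance (about a 63ms delta). A sketch of that comparison; the one-second tolerance used here is an assumption for illustration.

// clockskew.go - parse the guest's `date +%s.%N` output and measure the skew.
package main

import (
    "fmt"
    "strconv"
    "time"
)

func guestTime(dateOutput string) (time.Time, error) {
    secs, err := strconv.ParseFloat(dateOutput, 64)
    if err != nil {
        return time.Time{}, err
    }
    return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
    guest, err := guestTime("1726256893.918221507") // value from the log
    if err != nil {
        panic(err)
    }
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    const tolerance = time.Second // assumed tolerance for the sketch
    if delta <= tolerance {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    } else {
        fmt.Printf("guest clock skewed by %v, consider syncing\n", delta)
    }
}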
	I0913 19:48:13.963396   65574 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 25.387060112s
	I0913 19:48:13.963436   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:13.963695   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:48:13.966720   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.967141   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.967171   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.967438   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:13.967965   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:13.968106   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:48:13.968184   65574 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:48:13.968224   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.968292   65574 ssh_runner.go:195] Run: cat /version.json
	I0913 19:48:13.968309   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:48:13.970959   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.971359   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.971382   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.971585   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.971713   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.971748   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.971868   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.971982   65574 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:48:13.972148   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:13.972176   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:13.972364   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:48:13.972502   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:48:13.972611   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:48:13.972737   65574 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:48:14.075219   65574 ssh_runner.go:195] Run: systemctl --version
	I0913 19:48:14.083640   65574 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:48:14.247088   65574 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:48:14.253368   65574 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:48:14.253451   65574 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:48:14.273204   65574 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:48:14.273227   65574 start.go:495] detecting cgroup driver to use...
	I0913 19:48:14.273280   65574 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:48:14.292567   65574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:48:14.308062   65574 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:48:14.308121   65574 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:48:14.322305   65574 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:48:14.338088   65574 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:48:14.482582   65574 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:48:14.664445   65574 docker.go:233] disabling docker service ...
	I0913 19:48:14.664532   65574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:48:14.681794   65574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:48:14.698633   65574 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:48:14.847971   65574 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:48:14.984900   65574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:48:15.001493   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:48:15.031765   65574 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:48:15.031828   65574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:48:15.043285   65574 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:48:15.043340   65574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:48:15.056729   65574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:48:15.068852   65574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:48:15.083510   65574 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:48:15.097478   65574 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:48:15.107620   65574 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:48:15.107690   65574 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:48:15.124140   65574 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:48:15.136754   65574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:48:15.290451   65574 ssh_runner.go:195] Run: sudo systemctl restart crio
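The block above configures CRI-O in place: point crictl at crio.sock, rewrite pause_image and cgroup_manager with sed, then daemon-reload and restart crio. A sketch of driving that sequence from Go; runCmd is a stand-in for the SSH runner and only prints what it would execute.

// crioconfig.go - replay the cri-o configuration steps shown in the log.
package main

import "fmt"

func runCmd(cmd string) error {
    fmt.Println("Run:", cmd)
    return nil // a real runner would exec this over SSH and return its error
}

func configureCRIO(pauseImage, cgroupMgr string) error {
    steps := []string{
        `/bin/bash -c "sudo mkdir -p /etc && printf %s 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml"`,
        fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
        fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupMgr),
        `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo systemctl daemon-reload`,
        `sudo systemctl restart crio`,
    }
    for _, s := range steps {
        if err := runCmd(s); err != nil {
            return fmt.Errorf("step %q failed: %w", s, err)
        }
    }
    return nil
}

func main() {
    if err := configureCRIO("registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
        fmt.Println(err)
    }
}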
	I0913 19:48:15.406487   65574 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:48:15.406541   65574 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:48:15.412899   65574 start.go:563] Will wait 60s for crictl version
	I0913 19:48:15.412956   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:15.417876   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:48:15.469387   65574 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:48:15.469476   65574 ssh_runner.go:195] Run: crio --version
	I0913 19:48:15.502449   65574 ssh_runner.go:195] Run: crio --version
	I0913 19:48:15.538986   65574 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:48:15.540739   65574 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:48:15.543751   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:15.544208   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:48:05 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:48:15.544235   65574 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:48:15.544465   65574 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:48:15.548930   65574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:48:15.564368   65574 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:48:15.564516   65574 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:48:15.564578   65574 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:48:15.598226   65574 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:48:15.598308   65574 ssh_runner.go:195] Run: which lz4
	I0913 19:48:15.603011   65574 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:48:15.607697   65574 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:48:15.607730   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:48:17.211769   65574 crio.go:462] duration metric: took 1.608788016s to copy over tarball
	I0913 19:48:17.211844   65574 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:48:19.949223   65574 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.737344384s)
	I0913 19:48:19.949251   65574 crio.go:469] duration metric: took 2.737454972s to extract the tarball
	I0913 19:48:19.949260   65574 ssh_runner.go:146] rm: /preloaded.tar.lz4
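The preload decision above hinges on `sudo crictl images --output json`: if the marker image for the target Kubernetes version is missing, the preloaded tarball is copied over and extracted with `tar -I lz4`. A sketch of that check, assuming crictl's JSON shape of {"images":[{"repoTags":[...]}]}.

// preloadcheck.go - decide whether images are preloaded from crictl's JSON output.
package main

import (
    "encoding/json"
    "fmt"
)

type crictlImages struct {
    Images []struct {
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
    var out crictlImages
    if err := json.Unmarshal(raw, &out); err != nil {
        return false, err
    }
    for _, img := range out.Images {
        for _, tag := range img.RepoTags {
            if tag == want {
                return true, nil
            }
        }
    }
    return false, nil
}

func main() {
    raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
    ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.20.0")
    if err != nil {
        panic(err)
    }
    if !ok {
        fmt.Println(`couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.`)
    }
}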
	I0913 19:48:20.001156   65574 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:48:20.048806   65574 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:48:20.048831   65574 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:48:20.048904   65574 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:48:20.048927   65574 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.048948   65574 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.048963   65574 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.048986   65574 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.049120   65574 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:48:20.048935   65574 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.049127   65574 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.050293   65574 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.050720   65574 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.050731   65574 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:48:20.050734   65574 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.050721   65574 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.050758   65574 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.050722   65574 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.051192   65574 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:48:20.222639   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:48:20.254617   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.267202   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.267885   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.273735   65574 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:48:20.273773   65574 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:48:20.273812   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.278764   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.279526   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.357651   65574 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:48:20.357700   65574 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.357748   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.358414   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.363838   65574 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:48:20.363878   65574 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.363918   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.378335   65574 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:48:20.378381   65574 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.378429   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.378523   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:48:20.456416   65574 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:48:20.456459   65574 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.456507   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.456601   65574 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:48:20.456621   65574 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.456647   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.456708   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.493943   65574 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:48:20.493981   65574 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.494008   65574 ssh_runner.go:195] Run: which crictl
	I0913 19:48:20.494066   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.494225   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:48:20.494239   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.535192   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.535218   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.535229   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.628825   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.628889   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.628983   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.629065   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:48:20.711156   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:48:20.723312   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.723410   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.842547   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.842602   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:48:20.842679   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:48:20.843036   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:48:20.880124   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:48:20.928132   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:48:20.928217   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:48:20.997282   65574 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:48:20.997334   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:48:21.004621   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:48:21.061160   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:48:21.061258   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:48:21.075567   65574 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:48:21.260843   65574 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:48:21.407205   65574 cache_images.go:92] duration metric: took 1.358352445s to LoadCachedImages
	W0913 19:48:21.407308   65574 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
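For reference, the image handling above can be reproduced by hand; a minimal sketch, assuming SSH access to the node (e.g. via "minikube ssh -p old-k8s-version-234290") and the cache path used in this run:

    # on the node: list what CRI-O currently holds, and remove one image as the loader does
    sudo crictl images
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
    # on the host: check which images are actually present in minikube's on-disk cache
    ls /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/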
	I0913 19:48:21.407328   65574 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:48:21.407462   65574 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:48:21.407551   65574 ssh_runner.go:195] Run: crio config
	I0913 19:48:21.465937   65574 cni.go:84] Creating CNI manager for ""
	I0913 19:48:21.465958   65574 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:48:21.465967   65574 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:48:21.465983   65574 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:48:21.466132   65574 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
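The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml a few steps below. A sketch of how it could be sanity-checked on the node before init, assuming the v1.20.0 kubeadm binary staged by minikube and that this kubeadm accepts --config for the preflight phase:

    sudo cat /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml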
	
	I0913 19:48:21.466195   65574 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:48:21.477487   65574 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:48:21.477572   65574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:48:21.487970   65574 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:48:21.510130   65574 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:48:21.529958   65574 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:48:21.548820   65574 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:48:21.553415   65574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:48:21.566436   65574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:48:21.716666   65574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:48:21.738546   65574 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:48:21.738572   65574 certs.go:194] generating shared ca certs ...
	I0913 19:48:21.738590   65574 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:21.738742   65574 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:48:21.738792   65574 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:48:21.738808   65574 certs.go:256] generating profile certs ...
	I0913 19:48:21.738871   65574 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:48:21.738901   65574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.crt with IP's: []
	I0913 19:48:22.153872   65574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.crt ...
	I0913 19:48:22.153905   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.crt: {Name:mk8b320ccf147d0711cf6d045238a5c05310a038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.154137   65574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key ...
	I0913 19:48:22.154191   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key: {Name:mke4259fe804ffc5586980a3fc1f84057e487cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.154330   65574 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:48:22.154352   65574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt.e5f62d17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.137]
	I0913 19:48:22.411200   65574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt.e5f62d17 ...
	I0913 19:48:22.411229   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt.e5f62d17: {Name:mk5316def20993f7c1e8f4c61e901501066f7e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.459139   65574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17 ...
	I0913 19:48:22.459191   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17: {Name:mk703f2d5a14eeed85de22ef425d583559dc5135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.459322   65574 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt.e5f62d17 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt
	I0913 19:48:22.459421   65574 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17 -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key
	I0913 19:48:22.459501   65574 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:48:22.459521   65574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt with IP's: []
	I0913 19:48:22.659212   65574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt ...
	I0913 19:48:22.659242   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt: {Name:mka52fbd05a8f5122056da80dd7991278bbd049e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.659436   65574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key ...
	I0913 19:48:22.659454   65574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key: {Name:mkd6cc3697f7bb544bb5eef88b262fcbd98263da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:48:22.659652   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:48:22.659701   65574 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:48:22.659716   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:48:22.659754   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:48:22.659788   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:48:22.659816   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:48:22.659869   65574 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:48:22.660412   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:48:22.699226   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:48:22.738239   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:48:22.781299   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:48:22.833235   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:48:22.864396   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:48:22.892078   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:48:22.920926   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:48:23.022789   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:48:23.059153   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:48:23.089094   65574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:48:23.119059   65574 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
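A quick way to confirm the SANs baked into the apiserver certificate generated above (expected to cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.137, per the crypto.go line earlier) is a standard openssl inspection; a sketch:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt \
        | grep -A1 'Subject Alternative Name'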
	I0913 19:48:23.141178   65574 ssh_runner.go:195] Run: openssl version
	I0913 19:48:23.149307   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:48:23.162515   65574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:48:23.169285   65574 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:48:23.169347   65574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:48:23.178450   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:48:23.194134   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:48:23.207066   65574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:23.213399   65574 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:23.213473   65574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:48:23.221503   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:48:23.238193   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:48:23.250481   65574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:48:23.255846   65574 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:48:23.255899   65574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:48:23.262435   65574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
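The hash printed by "openssl x509 -hash -noout" is what names each /etc/ssl/certs/<hash>.0 symlink above (b5213941.0 for minikubeCA.pem, 51391683.0 for 11079.pem). A sketch of the manual check on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem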
	I0913 19:48:23.275046   65574 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:48:23.279802   65574 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 19:48:23.279868   65574 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:48:23.279972   65574 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:48:23.280032   65574 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:48:23.325205   65574 cri.go:89] found id: ""
	I0913 19:48:23.325301   65574 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:48:23.336332   65574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:48:23.347527   65574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:48:23.359231   65574 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:48:23.359251   65574 kubeadm.go:157] found existing configuration files:
	
	I0913 19:48:23.359304   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:48:23.370716   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:48:23.370780   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:48:23.382047   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:48:23.393839   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:48:23.393904   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:48:23.407616   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:48:23.418968   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:48:23.419035   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:48:23.429817   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:48:23.446316   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:48:23.446381   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:48:23.462319   65574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 19:48:23.621973   65574 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 19:48:23.622090   65574 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 19:48:23.799736   65574 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 19:48:23.799890   65574 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 19:48:23.800023   65574 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 19:48:24.000608   65574 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 19:48:24.050233   65574 out.go:235]   - Generating certificates and keys ...
	I0913 19:48:24.050351   65574 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 19:48:24.050453   65574 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 19:48:24.116695   65574 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 19:48:24.309498   65574 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 19:48:24.424997   65574 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 19:48:24.734852   65574 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 19:48:25.193333   65574 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 19:48:25.193596   65574 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0913 19:48:25.442499   65574 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 19:48:25.442800   65574 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0913 19:48:25.606357   65574 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 19:48:25.868306   65574 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 19:48:26.019429   65574 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 19:48:26.019612   65574 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 19:48:26.186548   65574 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 19:48:26.618265   65574 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 19:48:26.807149   65574 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 19:48:27.113245   65574 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 19:48:27.134715   65574 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 19:48:27.137282   65574 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 19:48:27.137340   65574 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 19:48:27.299319   65574 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 19:48:27.301761   65574 out.go:235]   - Booting up control plane ...
	I0913 19:48:27.301893   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 19:48:27.319651   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 19:48:27.322991   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 19:48:27.324534   65574 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 19:48:27.334024   65574 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 19:49:07.297519   65574 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 19:49:07.298361   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:49:07.298724   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:49:12.297896   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:49:12.298166   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:49:22.296993   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:49:22.297250   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:49:42.297041   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:49:42.297304   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:50:22.296472   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:50:22.296981   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:50:22.297006   65574 kubeadm.go:310] 
	I0913 19:50:22.297103   65574 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 19:50:22.297258   65574 kubeadm.go:310] 		timed out waiting for the condition
	I0913 19:50:22.297310   65574 kubeadm.go:310] 
	I0913 19:50:22.297406   65574 kubeadm.go:310] 	This error is likely caused by:
	I0913 19:50:22.297481   65574 kubeadm.go:310] 		- The kubelet is not running
	I0913 19:50:22.297724   65574 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 19:50:22.297737   65574 kubeadm.go:310] 
	I0913 19:50:22.297944   65574 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 19:50:22.298036   65574 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 19:50:22.298153   65574 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 19:50:22.298253   65574 kubeadm.go:310] 
	I0913 19:50:22.298538   65574 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 19:50:22.298882   65574 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 19:50:22.298905   65574 kubeadm.go:310] 
	I0913 19:50:22.299308   65574 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 19:50:22.299531   65574 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 19:50:22.299845   65574 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 19:50:22.299959   65574 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 19:50:22.299991   65574 kubeadm.go:310] 
	I0913 19:50:22.300134   65574 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 19:50:22.300251   65574 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 19:50:22.300346   65574 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 19:50:22.300475   65574 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-234290] and IPs [192.168.72.137 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
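On the node, the kubeadm advice above maps to three concrete checks, using the crio socket from this run; a sketch:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause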
	
	I0913 19:50:22.300516   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 19:50:23.592243   65574 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.29170485s)
	I0913 19:50:23.592325   65574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:50:23.607864   65574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:50:23.617636   65574 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:50:23.617659   65574 kubeadm.go:157] found existing configuration files:
	
	I0913 19:50:23.617714   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:50:23.627043   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:50:23.627113   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:50:23.636699   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:50:23.645706   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:50:23.645754   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:50:23.655047   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:50:23.663741   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:50:23.663803   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:50:23.672835   65574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:50:23.681460   65574 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:50:23.681507   65574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:50:23.690568   65574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 19:50:23.908720   65574 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 19:52:19.711308   65574 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 19:52:19.711430   65574 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 19:52:19.713178   65574 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 19:52:19.713242   65574 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 19:52:19.713339   65574 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 19:52:19.713489   65574 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 19:52:19.713626   65574 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 19:52:19.713790   65574 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 19:52:19.716411   65574 out.go:235]   - Generating certificates and keys ...
	I0913 19:52:19.716503   65574 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 19:52:19.716603   65574 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 19:52:19.716701   65574 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 19:52:19.716758   65574 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 19:52:19.716817   65574 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 19:52:19.716872   65574 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 19:52:19.716945   65574 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 19:52:19.717039   65574 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 19:52:19.717146   65574 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 19:52:19.717243   65574 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 19:52:19.717281   65574 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 19:52:19.717356   65574 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 19:52:19.717398   65574 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 19:52:19.717448   65574 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 19:52:19.717523   65574 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 19:52:19.717566   65574 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 19:52:19.717653   65574 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 19:52:19.717781   65574 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 19:52:19.717820   65574 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 19:52:19.717877   65574 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 19:52:19.719240   65574 out.go:235]   - Booting up control plane ...
	I0913 19:52:19.719337   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 19:52:19.719401   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 19:52:19.719453   65574 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 19:52:19.719518   65574 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 19:52:19.719644   65574 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 19:52:19.719688   65574 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 19:52:19.719741   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:52:19.719962   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:52:19.720078   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:52:19.720278   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:52:19.720343   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:52:19.720509   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:52:19.720574   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:52:19.720746   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:52:19.720823   65574 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 19:52:19.721006   65574 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 19:52:19.721020   65574 kubeadm.go:310] 
	I0913 19:52:19.721060   65574 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 19:52:19.721126   65574 kubeadm.go:310] 		timed out waiting for the condition
	I0913 19:52:19.721133   65574 kubeadm.go:310] 
	I0913 19:52:19.721162   65574 kubeadm.go:310] 	This error is likely caused by:
	I0913 19:52:19.721195   65574 kubeadm.go:310] 		- The kubelet is not running
	I0913 19:52:19.721281   65574 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 19:52:19.721287   65574 kubeadm.go:310] 
	I0913 19:52:19.721380   65574 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 19:52:19.721419   65574 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 19:52:19.721447   65574 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 19:52:19.721453   65574 kubeadm.go:310] 
	I0913 19:52:19.721592   65574 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 19:52:19.721708   65574 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 19:52:19.721718   65574 kubeadm.go:310] 
	I0913 19:52:19.721857   65574 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 19:52:19.721981   65574 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 19:52:19.722057   65574 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 19:52:19.722155   65574 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 19:52:19.722212   65574 kubeadm.go:310] 
	I0913 19:52:19.722219   65574 kubeadm.go:394] duration metric: took 3m56.442354574s to StartCluster
	I0913 19:52:19.722264   65574 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:52:19.722314   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:52:19.771911   65574 cri.go:89] found id: ""
	I0913 19:52:19.771938   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.771946   65574 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:52:19.771952   65574 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:52:19.772013   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:52:19.805861   65574 cri.go:89] found id: ""
	I0913 19:52:19.805889   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.805897   65574 logs.go:278] No container was found matching "etcd"
	I0913 19:52:19.805902   65574 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:52:19.805950   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:52:19.839023   65574 cri.go:89] found id: ""
	I0913 19:52:19.839045   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.839054   65574 logs.go:278] No container was found matching "coredns"
	I0913 19:52:19.839059   65574 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:52:19.839106   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:52:19.878728   65574 cri.go:89] found id: ""
	I0913 19:52:19.878759   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.878768   65574 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:52:19.878773   65574 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:52:19.878834   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:52:19.915983   65574 cri.go:89] found id: ""
	I0913 19:52:19.916005   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.916014   65574 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:52:19.916019   65574 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:52:19.916074   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:52:19.957780   65574 cri.go:89] found id: ""
	I0913 19:52:19.957807   65574 logs.go:276] 0 containers: []
	W0913 19:52:19.957817   65574 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:52:19.957824   65574 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:52:19.957885   65574 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:52:20.003052   65574 cri.go:89] found id: ""
	I0913 19:52:20.003077   65574 logs.go:276] 0 containers: []
	W0913 19:52:20.003088   65574 logs.go:278] No container was found matching "kindnet"
	I0913 19:52:20.003099   65574 logs.go:123] Gathering logs for dmesg ...
	I0913 19:52:20.003115   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:52:20.017489   65574 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:52:20.017515   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:52:20.133262   65574 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:52:20.133287   65574 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:52:20.133302   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:52:20.255281   65574 logs.go:123] Gathering logs for container status ...
	I0913 19:52:20.255311   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:52:20.317749   65574 logs.go:123] Gathering logs for kubelet ...
	I0913 19:52:20.317784   65574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 19:52:20.366255   65574 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 19:52:20.366339   65574 out.go:270] * 
	W0913 19:52:20.366392   65574 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 19:52:20.366403   65574 out.go:270] * 
	W0913 19:52:20.367132   65574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 19:52:20.370686   65574 out.go:201] 
	W0913 19:52:20.372112   65574 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 19:52:20.372162   65574 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 19:52:20.372183   65574 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 19:52:20.373752   65574 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
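Note on the failure mode: the repeated "[kubelet-check]" messages in the log above are kubeadm probing the kubelet's healthz endpoint on localhost:10248 and getting "connection refused" until the 4m0s wait-control-plane budget expires, i.e. the kubelet never came up. A minimal Go sketch of that kind of probe loop is shown below; the endpoint and the 4m0s budget are taken from the log, while the function name, the 5s poll interval, and the 2s per-request timeout are illustrative assumptions rather than kubeadm's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthz polls http://localhost:10248/healthz until it returns
// 200 OK or the overall timeout expires, roughly what the "[kubelet-check]"
// lines in the log above correspond to. (Sketch only; not kubeadm's code.)
func waitForKubeletHealthz(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second} // assumed per-request timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the "connection refused" failures seen in the log.
			fmt.Printf("kubelet healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("kubelet healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(5 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for kubelet healthz after %s", timeout)
}

func main() {
	if err := waitForKubeletHealthz(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}

On a node in this state the probe never succeeds, which is why kubeadm falls through to the "The kubelet is not running" advice and the test exits with status 109.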
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 6 (216.069968ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:20.631022   70971 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-234290" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.21s)
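Note: the empty found id: "" results gathered during the post-mortem above come from listing CRI containers with crictl and finding nothing for any control-plane component, which is consistent with the kubelet never launching the static pods. A hedged Go sketch of that check is below: it shells out to crictl the way the log does and treats empty output as "no container found". The helper name and the hard-coded "sudo" invocation are illustrative assumptions, not minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs crictl with the same flags seen in the log and returns
// whatever container IDs match the given name filter. (Sketch only.)
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// This is the condition behind the 'No container was found matching "..."' warnings above.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s containers: %v\n", name, ids)
	}
}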

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-175374 --alsologtostderr -v=3
E0913 19:50:03.268726   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:05.830340   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-175374 --alsologtostderr -v=3: exit status 82 (2m0.510270151s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-175374"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:50:02.756896   70169 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:50:02.757011   70169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:02.757020   70169 out.go:358] Setting ErrFile to fd 2...
	I0913 19:50:02.757024   70169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:02.757212   70169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:50:02.757441   70169 out.go:352] Setting JSON to false
	I0913 19:50:02.757520   70169 mustload.go:65] Loading cluster: embed-certs-175374
	I0913 19:50:02.757856   70169 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:02.757924   70169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:50:02.758092   70169 mustload.go:65] Loading cluster: embed-certs-175374
	I0913 19:50:02.758230   70169 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:02.758255   70169 stop.go:39] StopHost: embed-certs-175374
	I0913 19:50:02.758629   70169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:50:02.758670   70169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:50:02.774932   70169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0913 19:50:02.775449   70169 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:50:02.776000   70169 main.go:141] libmachine: Using API Version  1
	I0913 19:50:02.776027   70169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:50:02.776439   70169 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:50:02.779160   70169 out.go:177] * Stopping node "embed-certs-175374"  ...
	I0913 19:50:02.780490   70169 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 19:50:02.780522   70169 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:50:02.780795   70169 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 19:50:02.780836   70169 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:50:02.783994   70169 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:50:02.784471   70169 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:49:08 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:50:02.784508   70169 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:50:02.784619   70169 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:50:02.784809   70169 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:50:02.784951   70169 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:50:02.785121   70169 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:50:02.907724   70169 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 19:50:02.964348   70169 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 19:50:03.025268   70169 main.go:141] libmachine: Stopping "embed-certs-175374"...
	I0913 19:50:03.025300   70169 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:50:03.026968   70169 main.go:141] libmachine: (embed-certs-175374) Calling .Stop
	I0913 19:50:03.030468   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 0/120
	I0913 19:50:04.032057   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 1/120
	I0913 19:50:05.033439   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 2/120
	I0913 19:50:06.034970   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 3/120
	I0913 19:50:07.036826   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 4/120
	I0913 19:50:08.038615   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 5/120
	I0913 19:50:09.040018   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 6/120
	I0913 19:50:10.041308   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 7/120
	I0913 19:50:11.042554   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 8/120
	I0913 19:50:12.043996   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 9/120
	I0913 19:50:13.045668   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 10/120
	I0913 19:50:14.047078   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 11/120
	I0913 19:50:15.048546   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 12/120
	I0913 19:50:16.049787   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 13/120
	I0913 19:50:17.051084   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 14/120
	I0913 19:50:18.052997   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 15/120
	I0913 19:50:19.054328   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 16/120
	I0913 19:50:20.056688   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 17/120
	I0913 19:50:21.058278   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 18/120
	I0913 19:50:22.060410   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 19/120
	I0913 19:50:23.062461   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 20/120
	I0913 19:50:24.064173   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 21/120
	I0913 19:50:25.065581   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 22/120
	I0913 19:50:26.066848   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 23/120
	I0913 19:50:27.068240   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 24/120
	I0913 19:50:28.070054   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 25/120
	I0913 19:50:29.071345   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 26/120
	I0913 19:50:30.072701   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 27/120
	I0913 19:50:31.074067   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 28/120
	I0913 19:50:32.075775   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 29/120
	I0913 19:50:33.077694   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 30/120
	I0913 19:50:34.079207   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 31/120
	I0913 19:50:35.080552   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 32/120
	I0913 19:50:36.081903   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 33/120
	I0913 19:50:37.083272   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 34/120
	I0913 19:50:38.085328   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 35/120
	I0913 19:50:39.086643   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 36/120
	I0913 19:50:40.087885   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 37/120
	I0913 19:50:41.089513   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 38/120
	I0913 19:50:42.090774   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 39/120
	I0913 19:50:43.093033   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 40/120
	I0913 19:50:44.094551   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 41/120
	I0913 19:50:45.096588   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 42/120
	I0913 19:50:46.097909   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 43/120
	I0913 19:50:47.099475   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 44/120
	I0913 19:50:48.101401   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 45/120
	I0913 19:50:49.102748   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 46/120
	I0913 19:50:50.104206   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 47/120
	I0913 19:50:51.105838   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 48/120
	I0913 19:50:52.107866   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 49/120
	I0913 19:50:53.109502   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 50/120
	I0913 19:50:54.110756   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 51/120
	I0913 19:50:55.111936   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 52/120
	I0913 19:50:56.113430   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 53/120
	I0913 19:50:57.114704   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 54/120
	I0913 19:50:58.116630   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 55/120
	I0913 19:50:59.118040   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 56/120
	I0913 19:51:00.119275   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 57/120
	I0913 19:51:01.120578   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 58/120
	I0913 19:51:02.121843   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 59/120
	I0913 19:51:03.123810   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 60/120
	I0913 19:51:04.125071   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 61/120
	I0913 19:51:05.126559   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 62/120
	I0913 19:51:06.127805   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 63/120
	I0913 19:51:07.129138   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 64/120
	I0913 19:51:08.131568   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 65/120
	I0913 19:51:09.132997   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 66/120
	I0913 19:51:10.134646   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 67/120
	I0913 19:51:11.135987   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 68/120
	I0913 19:51:12.137261   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 69/120
	I0913 19:51:13.139440   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 70/120
	I0913 19:51:14.141080   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 71/120
	I0913 19:51:15.142655   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 72/120
	I0913 19:51:16.144130   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 73/120
	I0913 19:51:17.145455   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 74/120
	I0913 19:51:18.147377   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 75/120
	I0913 19:51:19.148934   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 76/120
	I0913 19:51:20.150282   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 77/120
	I0913 19:51:21.151650   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 78/120
	I0913 19:51:22.153020   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 79/120
	I0913 19:51:23.155290   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 80/120
	I0913 19:51:24.156602   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 81/120
	I0913 19:51:25.157921   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 82/120
	I0913 19:51:26.159452   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 83/120
	I0913 19:51:27.160895   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 84/120
	I0913 19:51:28.163006   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 85/120
	I0913 19:51:29.164457   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 86/120
	I0913 19:51:30.165851   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 87/120
	I0913 19:51:31.167199   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 88/120
	I0913 19:51:32.168557   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 89/120
	I0913 19:51:33.170759   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 90/120
	I0913 19:51:34.172769   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 91/120
	I0913 19:51:35.174202   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 92/120
	I0913 19:51:36.175561   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 93/120
	I0913 19:51:37.176880   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 94/120
	I0913 19:51:38.178696   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 95/120
	I0913 19:51:39.180200   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 96/120
	I0913 19:51:40.181646   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 97/120
	I0913 19:51:41.183011   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 98/120
	I0913 19:51:42.184461   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 99/120
	I0913 19:51:43.186852   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 100/120
	I0913 19:51:44.188097   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 101/120
	I0913 19:51:45.189536   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 102/120
	I0913 19:51:46.190826   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 103/120
	I0913 19:51:47.192190   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 104/120
	I0913 19:51:48.194167   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 105/120
	I0913 19:51:49.195656   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 106/120
	I0913 19:51:50.197017   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 107/120
	I0913 19:51:51.198456   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 108/120
	I0913 19:51:52.199727   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 109/120
	I0913 19:51:53.201260   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 110/120
	I0913 19:51:54.202615   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 111/120
	I0913 19:51:55.204024   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 112/120
	I0913 19:51:56.205549   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 113/120
	I0913 19:51:57.207226   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 114/120
	I0913 19:51:58.209262   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 115/120
	I0913 19:51:59.210577   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 116/120
	I0913 19:52:00.212876   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 117/120
	I0913 19:52:01.214137   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 118/120
	I0913 19:52:02.215497   70169 main.go:141] libmachine: (embed-certs-175374) Waiting for machine to stop 119/120
	I0913 19:52:03.216854   70169 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 19:52:03.216923   70169 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0913 19:52:03.218698   70169 out.go:201] 
	W0913 19:52:03.220055   70169 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0913 19:52:03.220073   70169 out.go:270] * 
	W0913 19:52:03.222885   70169 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 19:52:03.224380   70169 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-175374 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
E0913 19:52:13.310898   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.587940   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.594373   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.605732   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.627102   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.668544   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.750583   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:19.912108   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374: exit status 3 (18.552968706s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:21.778450   70859 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0913 19:52:21.778472   70859 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-175374" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.06s)
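Note: the stop-then-status sequence captured above can be replayed locally against the same profile. The following is a minimal illustrative sketch (not part of the minikube test suite); the binary path and profile name are taken from the log above, and an exit status 82 from the stop call corresponds to the GUEST_STOP_TIMEOUT failure shown here, while exit status 3 from the status call matches the post-mortem check.

// repro_stop.go: replay the stop + status commands from the failing run above.
// Illustrative sketch only; binary path and profile name come from the log.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its exit code.
func run(name string, args ...string) int {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	const bin = "out/minikube-linux-amd64" // path used by the CI job above
	const profile = "embed-certs-175374"   // profile from the log above

	stopCode := run(bin, "stop", "-p", profile, "--alsologtostderr", "-v=3")
	statusCode := run(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	fmt.Println("stop exit:", stopCode, "status exit:", statusCode)
}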

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-239327 --alsologtostderr -v=3
E0913 19:50:21.193698   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:22.553019   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:32.794959   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:41.675239   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-239327 --alsologtostderr -v=3: exit status 82 (2m0.490782688s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-239327"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:50:19.749472   70335 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:50:19.749712   70335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:19.749722   70335 out.go:358] Setting ErrFile to fd 2...
	I0913 19:50:19.749726   70335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:19.749930   70335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:50:19.750212   70335 out.go:352] Setting JSON to false
	I0913 19:50:19.750305   70335 mustload.go:65] Loading cluster: no-preload-239327
	I0913 19:50:19.750649   70335 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:19.750729   70335 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:50:19.750907   70335 mustload.go:65] Loading cluster: no-preload-239327
	I0913 19:50:19.751029   70335 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:19.751060   70335 stop.go:39] StopHost: no-preload-239327
	I0913 19:50:19.751527   70335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:50:19.751565   70335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:50:19.765697   70335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I0913 19:50:19.766185   70335 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:50:19.766792   70335 main.go:141] libmachine: Using API Version  1
	I0913 19:50:19.766818   70335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:50:19.767146   70335 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:50:19.769136   70335 out.go:177] * Stopping node "no-preload-239327"  ...
	I0913 19:50:19.770276   70335 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 19:50:19.770319   70335 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:50:19.770501   70335 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 19:50:19.770521   70335 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:50:19.773447   70335 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:50:19.773833   70335 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:48:41 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:50:19.773868   70335 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:50:19.774000   70335 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:50:19.774169   70335 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:50:19.774311   70335 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:50:19.774468   70335 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:50:19.872012   70335 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 19:50:19.939471   70335 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 19:50:19.998396   70335 main.go:141] libmachine: Stopping "no-preload-239327"...
	I0913 19:50:19.998461   70335 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:50:20.000230   70335 main.go:141] libmachine: (no-preload-239327) Calling .Stop
	I0913 19:50:20.003976   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 0/120
	I0913 19:50:21.005554   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 1/120
	I0913 19:50:22.006944   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 2/120
	I0913 19:50:23.008354   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 3/120
	I0913 19:50:24.010052   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 4/120
	I0913 19:50:25.012300   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 5/120
	I0913 19:50:26.013751   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 6/120
	I0913 19:50:27.015131   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 7/120
	I0913 19:50:28.016707   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 8/120
	I0913 19:50:29.018138   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 9/120
	I0913 19:50:30.020501   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 10/120
	I0913 19:50:31.021828   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 11/120
	I0913 19:50:32.023248   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 12/120
	I0913 19:50:33.024587   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 13/120
	I0913 19:50:34.026037   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 14/120
	I0913 19:50:35.027847   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 15/120
	I0913 19:50:36.029292   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 16/120
	I0913 19:50:37.030619   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 17/120
	I0913 19:50:38.031904   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 18/120
	I0913 19:50:39.033252   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 19/120
	I0913 19:50:40.035156   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 20/120
	I0913 19:50:41.036466   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 21/120
	I0913 19:50:42.037817   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 22/120
	I0913 19:50:43.039095   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 23/120
	I0913 19:50:44.040547   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 24/120
	I0913 19:50:45.042440   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 25/120
	I0913 19:50:46.043909   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 26/120
	I0913 19:50:47.045097   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 27/120
	I0913 19:50:48.046414   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 28/120
	I0913 19:50:49.048682   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 29/120
	I0913 19:50:50.050791   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 30/120
	I0913 19:50:51.052630   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 31/120
	I0913 19:50:52.053957   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 32/120
	I0913 19:50:53.055612   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 33/120
	I0913 19:50:54.056874   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 34/120
	I0913 19:50:55.058739   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 35/120
	I0913 19:50:56.060159   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 36/120
	I0913 19:50:57.061574   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 37/120
	I0913 19:50:58.062928   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 38/120
	I0913 19:50:59.064650   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 39/120
	I0913 19:51:00.066873   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 40/120
	I0913 19:51:01.068329   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 41/120
	I0913 19:51:02.069671   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 42/120
	I0913 19:51:03.070983   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 43/120
	I0913 19:51:04.072684   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 44/120
	I0913 19:51:05.074635   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 45/120
	I0913 19:51:06.075972   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 46/120
	I0913 19:51:07.077284   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 47/120
	I0913 19:51:08.078950   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 48/120
	I0913 19:51:09.080341   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 49/120
	I0913 19:51:10.082831   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 50/120
	I0913 19:51:11.084223   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 51/120
	I0913 19:51:12.085570   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 52/120
	I0913 19:51:13.087070   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 53/120
	I0913 19:51:14.088515   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 54/120
	I0913 19:51:15.090566   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 55/120
	I0913 19:51:16.092144   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 56/120
	I0913 19:51:17.093466   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 57/120
	I0913 19:51:18.094998   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 58/120
	I0913 19:51:19.096256   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 59/120
	I0913 19:51:20.098504   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 60/120
	I0913 19:51:21.100373   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 61/120
	I0913 19:51:22.101783   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 62/120
	I0913 19:51:23.103212   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 63/120
	I0913 19:51:24.104716   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 64/120
	I0913 19:51:25.106956   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 65/120
	I0913 19:51:26.108430   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 66/120
	I0913 19:51:27.110090   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 67/120
	I0913 19:51:28.111704   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 68/120
	I0913 19:51:29.113047   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 69/120
	I0913 19:51:30.115244   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 70/120
	I0913 19:51:31.116599   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 71/120
	I0913 19:51:32.117962   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 72/120
	I0913 19:51:33.119236   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 73/120
	I0913 19:51:34.120611   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 74/120
	I0913 19:51:35.122796   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 75/120
	I0913 19:51:36.124087   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 76/120
	I0913 19:51:37.125394   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 77/120
	I0913 19:51:38.126796   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 78/120
	I0913 19:51:39.128174   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 79/120
	I0913 19:51:40.129533   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 80/120
	I0913 19:51:41.131126   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 81/120
	I0913 19:51:42.132651   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 82/120
	I0913 19:51:43.133939   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 83/120
	I0913 19:51:44.135449   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 84/120
	I0913 19:51:45.137365   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 85/120
	I0913 19:51:46.138658   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 86/120
	I0913 19:51:47.140002   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 87/120
	I0913 19:51:48.141322   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 88/120
	I0913 19:51:49.142695   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 89/120
	I0913 19:51:50.144736   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 90/120
	I0913 19:51:51.146225   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 91/120
	I0913 19:51:52.147740   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 92/120
	I0913 19:51:53.149143   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 93/120
	I0913 19:51:54.150813   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 94/120
	I0913 19:51:55.153013   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 95/120
	I0913 19:51:56.154379   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 96/120
	I0913 19:51:57.155802   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 97/120
	I0913 19:51:58.157092   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 98/120
	I0913 19:51:59.158565   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 99/120
	I0913 19:52:00.160930   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 100/120
	I0913 19:52:01.162258   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 101/120
	I0913 19:52:02.163684   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 102/120
	I0913 19:52:03.165242   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 103/120
	I0913 19:52:04.166590   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 104/120
	I0913 19:52:05.168687   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 105/120
	I0913 19:52:06.169955   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 106/120
	I0913 19:52:07.171336   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 107/120
	I0913 19:52:08.172731   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 108/120
	I0913 19:52:09.173982   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 109/120
	I0913 19:52:10.176125   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 110/120
	I0913 19:52:11.177624   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 111/120
	I0913 19:52:12.178951   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 112/120
	I0913 19:52:13.180539   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 113/120
	I0913 19:52:14.181998   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 114/120
	I0913 19:52:15.184132   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 115/120
	I0913 19:52:16.185570   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 116/120
	I0913 19:52:17.187045   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 117/120
	I0913 19:52:18.188358   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 118/120
	I0913 19:52:19.189838   70335 main.go:141] libmachine: (no-preload-239327) Waiting for machine to stop 119/120
	I0913 19:52:20.190497   70335 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 19:52:20.190553   70335 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0913 19:52:20.192486   70335 out.go:201] 
	W0913 19:52:20.193780   70335 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0913 19:52:20.193803   70335 out.go:270] * 
	* 
	W0913 19:52:20.196633   70335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 19:52:20.198026   70335 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-239327 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
E0913 19:52:20.233783   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:20.273593   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327: exit status 3 (18.474038949s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:38.674483   70941 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host
	E0913 19:52:38.674502   70941 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-239327" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)
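Note: the repeated "Waiting for machine to stop N/120" lines above come from a one-second poll that gives up after 120 attempts, which is why the stop command fails after roughly two minutes. The sketch below only illustrates that bounded-wait pattern; it is not minikube's code, and isStopped is a placeholder state check.

// waitstop.go: illustrative bounded wait loop mirroring the 120 x 1s poll
// visible in the log above; isStopped is a placeholder, not minikube's API.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls isStopped once per second, up to attempts times, and
// returns an error if the machine still has not stopped afterwards.
func waitForStop(isStopped func() bool, attempts int) error {
	for i := 0; i < attempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A check that never reports stopped, to exercise the ~2 minute timeout path.
	alwaysRunning := func() bool { return false }
	if err := waitForStop(alwaysRunning, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}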

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-512125 --alsologtostderr -v=3
E0913 19:50:53.942922   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:56.505109   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:57.575678   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:01.626516   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:11.867867   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:22.637266   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:32.349547   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:34.238272   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.295915   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.302287   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.313710   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.335090   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.376509   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.457928   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.619645   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:39.941101   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:40.583347   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:41.865563   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:44.427794   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:49.549700   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:51:59.791913   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-512125 --alsologtostderr -v=3: exit status 82 (2m0.507233506s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-512125"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:50:53.521993   70567 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:50:53.522132   70567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:53.522142   70567 out.go:358] Setting ErrFile to fd 2...
	I0913 19:50:53.522148   70567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:50:53.522331   70567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:50:53.522565   70567 out.go:352] Setting JSON to false
	I0913 19:50:53.522653   70567 mustload.go:65] Loading cluster: default-k8s-diff-port-512125
	I0913 19:50:53.522999   70567 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:53.523080   70567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:50:53.523262   70567 mustload.go:65] Loading cluster: default-k8s-diff-port-512125
	I0913 19:50:53.523387   70567 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:50:53.523419   70567 stop.go:39] StopHost: default-k8s-diff-port-512125
	I0913 19:50:53.523791   70567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:50:53.523837   70567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:50:53.538549   70567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0913 19:50:53.538945   70567 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:50:53.539424   70567 main.go:141] libmachine: Using API Version  1
	I0913 19:50:53.539446   70567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:50:53.539823   70567 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:50:53.543051   70567 out.go:177] * Stopping node "default-k8s-diff-port-512125"  ...
	I0913 19:50:53.544115   70567 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 19:50:53.544140   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:50:53.544360   70567 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 19:50:53.544391   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:50:53.547182   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:50:53.547604   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:50:53.547628   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:50:53.547744   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:50:53.547952   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:50:53.548121   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:50:53.548269   70567 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:50:53.662055   70567 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 19:50:53.721664   70567 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 19:50:53.784521   70567 main.go:141] libmachine: Stopping "default-k8s-diff-port-512125"...
	I0913 19:50:53.784549   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:50:53.786122   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Stop
	I0913 19:50:53.789499   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 0/120
	I0913 19:50:54.790941   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 1/120
	I0913 19:50:55.792123   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 2/120
	I0913 19:50:56.793496   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 3/120
	I0913 19:50:57.794826   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 4/120
	I0913 19:50:58.796954   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 5/120
	I0913 19:50:59.798352   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 6/120
	I0913 19:51:00.800451   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 7/120
	I0913 19:51:01.802159   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 8/120
	I0913 19:51:02.803453   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 9/120
	I0913 19:51:03.804845   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 10/120
	I0913 19:51:04.807147   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 11/120
	I0913 19:51:05.808406   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 12/120
	I0913 19:51:06.809809   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 13/120
	I0913 19:51:07.811129   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 14/120
	I0913 19:51:08.813250   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 15/120
	I0913 19:51:09.814724   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 16/120
	I0913 19:51:10.816069   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 17/120
	I0913 19:51:11.817555   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 18/120
	I0913 19:51:12.819044   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 19/120
	I0913 19:51:13.821484   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 20/120
	I0913 19:51:14.822792   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 21/120
	I0913 19:51:15.824167   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 22/120
	I0913 19:51:16.825505   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 23/120
	I0913 19:51:17.826704   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 24/120
	I0913 19:51:18.828683   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 25/120
	I0913 19:51:19.829977   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 26/120
	I0913 19:51:20.831463   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 27/120
	I0913 19:51:21.832831   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 28/120
	I0913 19:51:22.834304   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 29/120
	I0913 19:51:23.836602   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 30/120
	I0913 19:51:24.838086   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 31/120
	I0913 19:51:25.839550   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 32/120
	I0913 19:51:26.841318   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 33/120
	I0913 19:51:27.842715   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 34/120
	I0913 19:51:28.844883   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 35/120
	I0913 19:51:29.846239   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 36/120
	I0913 19:51:30.847463   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 37/120
	I0913 19:51:31.848947   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 38/120
	I0913 19:51:32.850397   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 39/120
	I0913 19:51:33.852745   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 40/120
	I0913 19:51:34.854069   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 41/120
	I0913 19:51:35.855392   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 42/120
	I0913 19:51:36.856671   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 43/120
	I0913 19:51:37.858072   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 44/120
	I0913 19:51:38.860088   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 45/120
	I0913 19:51:39.861717   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 46/120
	I0913 19:51:40.862994   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 47/120
	I0913 19:51:41.864360   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 48/120
	I0913 19:51:42.865745   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 49/120
	I0913 19:51:43.868081   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 50/120
	I0913 19:51:44.869572   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 51/120
	I0913 19:51:45.870972   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 52/120
	I0913 19:51:46.872576   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 53/120
	I0913 19:51:47.873924   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 54/120
	I0913 19:51:48.876056   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 55/120
	I0913 19:51:49.877511   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 56/120
	I0913 19:51:50.878998   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 57/120
	I0913 19:51:51.880394   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 58/120
	I0913 19:51:52.881911   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 59/120
	I0913 19:51:53.883262   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 60/120
	I0913 19:51:54.884639   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 61/120
	I0913 19:51:55.886025   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 62/120
	I0913 19:51:56.887318   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 63/120
	I0913 19:51:57.888756   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 64/120
	I0913 19:51:58.890900   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 65/120
	I0913 19:51:59.892214   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 66/120
	I0913 19:52:00.893554   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 67/120
	I0913 19:52:01.894978   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 68/120
	I0913 19:52:02.896751   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 69/120
	I0913 19:52:03.899075   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 70/120
	I0913 19:52:04.900425   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 71/120
	I0913 19:52:05.901785   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 72/120
	I0913 19:52:06.903160   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 73/120
	I0913 19:52:07.904442   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 74/120
	I0913 19:52:08.906367   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 75/120
	I0913 19:52:09.907671   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 76/120
	I0913 19:52:10.908950   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 77/120
	I0913 19:52:11.910361   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 78/120
	I0913 19:52:12.911657   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 79/120
	I0913 19:52:13.913836   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 80/120
	I0913 19:52:14.915217   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 81/120
	I0913 19:52:15.916640   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 82/120
	I0913 19:52:16.918178   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 83/120
	I0913 19:52:17.919692   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 84/120
	I0913 19:52:18.921683   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 85/120
	I0913 19:52:19.923759   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 86/120
	I0913 19:52:20.925191   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 87/120
	I0913 19:52:21.926542   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 88/120
	I0913 19:52:22.927908   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 89/120
	I0913 19:52:23.930268   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 90/120
	I0913 19:52:24.931809   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 91/120
	I0913 19:52:25.933130   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 92/120
	I0913 19:52:26.934520   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 93/120
	I0913 19:52:27.935811   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 94/120
	I0913 19:52:28.937776   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 95/120
	I0913 19:52:29.939149   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 96/120
	I0913 19:52:30.940555   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 97/120
	I0913 19:52:31.941956   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 98/120
	I0913 19:52:32.943270   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 99/120
	I0913 19:52:33.945500   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 100/120
	I0913 19:52:34.946882   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 101/120
	I0913 19:52:35.948277   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 102/120
	I0913 19:52:36.949674   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 103/120
	I0913 19:52:37.951036   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 104/120
	I0913 19:52:38.953087   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 105/120
	I0913 19:52:39.954539   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 106/120
	I0913 19:52:40.955777   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 107/120
	I0913 19:52:41.957234   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 108/120
	I0913 19:52:42.958712   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 109/120
	I0913 19:52:43.961043   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 110/120
	I0913 19:52:44.962516   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 111/120
	I0913 19:52:45.964650   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 112/120
	I0913 19:52:46.966188   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 113/120
	I0913 19:52:47.967500   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 114/120
	I0913 19:52:48.969756   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 115/120
	I0913 19:52:49.971392   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 116/120
	I0913 19:52:50.972749   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 117/120
	I0913 19:52:51.974133   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 118/120
	I0913 19:52:52.975558   70567 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for machine to stop 119/120
	I0913 19:52:53.976996   70567 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 19:52:53.977063   70567 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0913 19:52:53.979266   70567 out.go:201] 
	W0913 19:52:53.980604   70567 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0913 19:52:53.980621   70567 out.go:270] * 
	* 
	W0913 19:52:53.983266   70567 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 19:52:53.984819   70567 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-512125 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
E0913 19:52:53.999150   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:56.160186   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:59.121388   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:00.564630   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:01.235013   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:09.362745   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125: exit status 3 (18.479580842s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:53:12.466443   71458 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0913 19:53:12.466463   71458 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-512125" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)
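For context on the GUEST_STOP_TIMEOUT above: the libmachine log shows the stop being polled roughly once per second for 120 attempts ("Waiting for machine to stop 112/120" through "119/120") before giving up with the VM still "Running". Below is a minimal Go sketch of that wait-and-give-up pattern, not minikube's actual implementation; getState is a hypothetical stand-in for the driver call.

package main

import (
	"fmt"
	"time"
)

// waitForStop polls the VM state once per second, up to 120 attempts,
// mirroring the "Waiting for machine to stop N/120" lines in the log.
func waitForStop(getState func() (string, error)) error {
	var state string
	for i := 0; i < 120; i++ {
		var err error
		if state, err = getState(); err == nil && state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		time.Sleep(time.Second)
	}
	// Terminal condition seen in the log:
	//   stop err: unable to stop vm, current state "Running"
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

func main() {
	// Stub driver that never stops, reproducing the timeout path.
	err := waitForStop(func() (string, error) { return "Running", nil })
	fmt.Println("stop err:", err)
}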

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-234290 create -f testdata/busybox.yaml
E0913 19:52:20.648711   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-234290 create -f testdata/busybox.yaml: exit status 1 (42.827071ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-234290" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-234290 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
E0913 19:52:20.875801   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 6 (215.243951ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:20.890877   71013 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-234290" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 6 (214.119155ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:21.104391   71059 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-234290" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
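The DeployApp failure above is a follow-on error: kubectl refuses to run because the "old-k8s-version-234290" context is missing from the kubeconfig reported in the status output. As an illustration only (not the test's code), a sketch of the kind of lookup that fails, using the standard client-go clientcmd loader and the kubeconfig path shown in the error messages:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the error output above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19636-3902/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["old-k8s-version-234290"]; !ok {
		// This is the condition kubectl reports as:
		//   error: context "old-k8s-version-234290" does not exist
		fmt.Println(`context "old-k8s-version-234290" does not exist`)
	}
}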

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m18.991514977s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-234290 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-234290 describe deploy/metrics-server -n kube-system: exit status 1 (43.05383ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-234290" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-234290 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 6 (214.332249ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:53:40.353662   71797 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-234290" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
E0913 19:52:22.158128   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:24.719558   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374: exit status 3 (3.167755016s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:24.946492   71118 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0913 19:52:24.946515   71118 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-175374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0913 19:52:29.841484   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-175374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152800199s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-175374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374: exit status 3 (3.063152163s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:34.162452   71200 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0913 19:52:34.162477   71200 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-175374" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
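The EnableAddonAfterStop failures (this one and the no-preload and default-k8s-diff-port variants below) all hinge on the same check: after a stop, the test expects `minikube status --format={{.Host}}` to report "Stopped", but it gets "Error" because SSH to the VM has no route to host. A minimal, hypothetical Go sketch of that status check, using only the command and flags shown in the log, is included here for reference; it is not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the status command from the log and returns the printed
// host state. minikube encodes state in non-zero exit codes, so stdout can
// still carry a value ("Error", "Stopped", ...) even when err != nil.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := hostState("embed-certs-175374")
	if state != "Stopped" {
		// Matches the assertion message in the log:
		//   expected post-stop host status to be -"Stopped"- but got *"Error"*
		fmt.Printf("expected post-stop host status %q, got %q\n", "Stopped", state)
	}
}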

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
E0913 19:52:40.082856   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327: exit status 3 (3.16800385s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:41.842471   71290 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host
	E0913 19:52:41.842524   71290 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-239327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0913 19:52:44.559083   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-239327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152277321s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-239327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
E0913 19:52:48.867830   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:48.874193   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:48.885548   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:48.906982   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:48.948386   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:49.030079   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:49.191705   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:49.513745   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:52:50.155892   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327: exit status 3 (3.063312539s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:52:51.058458   71370 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host
	E0913 19:52:51.058480   71370 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-239327" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125: exit status 3 (3.167910818s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:53:15.634451   71574 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0913 19:53:15.634482   71574 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-512125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-512125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15410173s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-512125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125: exit status 3 (3.061897701s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 19:53:24.850533   71669 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0913 19:53:24.850550   71669 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-512125" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (759.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0913 19:53:50.603890   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.610250   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.621610   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.643041   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.684415   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.766120   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:50.927626   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:51.249428   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:51.891730   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:53.174032   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:55.735977   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:00.858234   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:06.600860   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:10.806848   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:11.100522   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:23.156954   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:54:31.582037   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:00.700909   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:03.448588   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:12.298903   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:12.543723   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:28.400992   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:32.728488   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:40.002390   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:51.374511   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:55:57.576201   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:56:19.075037   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:56:34.465608   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:56:39.295164   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:57:06.999209   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:57:19.588793   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:57:47.290831   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:57:48.868172   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:58:16.569950   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:58:50.604144   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:59:06.601567   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:59:18.306964   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:00:00.700421   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:00:12.298726   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:00:51.374037   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:00:57.575588   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:01:39.295746   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:02:09.674670   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m35.673549282s)

                                                
                                                
-- stdout --
	* [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
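
The kubeadm.yaml printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration for v1.20.0) is rendered from the cluster parameters and then copied to /var/tmp/minikube/kubeadm.yaml.new as shown. A minimal Go sketch of rendering such a manifest with text/template follows for orientation only; it is simplified, the params struct and field names are illustrative assumptions, and it is not minikube's actual template or types. The sample values are taken from the log above.

// kubeadmtemplate.go: minimal sketch of rendering a kubeadm config like the
// one above from cluster parameters; simplified, not minikube's real code.
package main

import (
	"os"
	"text/template"
)

// initCfg covers only the InitConfiguration portion, for brevity.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

// params is a hypothetical parameter bag, not a minikube type.
type params struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
	CRISocket     string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values mirror the old-k8s-version-234290 profile seen in the log.
	_ = t.Execute(os.Stdout, params{
		NodeName:      "old-k8s-version-234290",
		NodeIP:        "192.168.72.137",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
	})
}
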
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
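
The openssl x509 -checkend 86400 calls above exit non-zero only if the named certificate would expire within the next 86400 seconds (24 hours), so the silent run here is why the existing profile certificates are reused rather than regenerated. A minimal Go sketch of the same check with crypto/x509 follows; it is illustrative only, not minikube's implementation, and the path in main is just an example copied from the log.

// certcheck.go: minimal sketch of the 24-hour expiry check performed by
// `openssl x509 -checkend 86400` in the log above; not minikube's code.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path taken from the log; purely for illustration.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
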
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
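
The block of pgrep runs above is a wait loop: the kube-apiserver process is polled roughly every 500ms until it appears or the wait window (about a minute here) elapses, after which the log-gathering below kicks in; the same wait-then-gather cycle repeats later in this log. A minimal Go sketch of that polling pattern follows; it is not minikube's actual code, and the 500ms interval and one-minute window are assumptions read off the timestamps above.

// pollsketch.go: minimal sketch of the wait-for-apiserver pattern seen above.
// Interval and timeout are inferred from the log, not taken from minikube.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess runs `sudo pgrep -xnf pattern` every interval until it
// succeeds (process found) or the deadline passes.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for process: " + pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err) // on timeout, a caller would fall back to gathering logs
	}
}
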
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
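
The cycle above repeats roughly every 2.5 seconds for the rest of this log: the tool probes for a running kube-apiserver process, lists CRI containers for each control-plane component (finding none), and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal Go sketch of that probe pattern follows, assuming only that pgrep and crictl are available on the node; the helper name and loop structure are hypothetical illustrations of what the log shows, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listContainers runs `crictl ps -a --quiet --name=<name>` (as in the log above)
// and returns any container IDs it prints, or nil if none were found.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for {
		// Is an apiserver process up yet? (same pgrep pattern as the log)
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Otherwise, enumerate control-plane containers the way the log does.
		for _, c := range components {
			fmt.Printf("%s: %d containers\n", c, len(listContainers(c)))
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
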
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
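
Every "describe nodes" attempt in this log fails the same way: with no apiserver listening on localhost:8443, kubectl cannot connect, so the command exits with status 1 and the quoted "connection ... refused" stderr is all that comes back. The short Go sketch below reproduces that single probe and its error handling; the command line is copied from the log, but the snippet is an illustrative reconstruction under that assumption, not the harness's own code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs via bash -c on the node.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Expected while the control plane is down:
		// "The connection to the server localhost:8443 was refused - did you specify the right host or port?"
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
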
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
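	
	Each cycle above is the same probe: check for a kube-apiserver process, list CRI containers for every control-plane component (all empty), then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output; "describe nodes" keeps failing because nothing is listening on localhost:8443. A minimal sketch of the same diagnostics, assuming shell access to the node (e.g. via minikube ssh) and using only the commands already shown in the log:
	
	  # Is an apiserver process running at all?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # List containers per component; empty output means no container was created.
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    sudo crictl ps -a --quiet --name="$c"
	  done
	  # Gather the same logs the harness collects.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # Fails with "connection refused" while the apiserver is down.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	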
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
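Each retry in this window runs the same sequence of probes: a pgrep for a kube-apiserver process, then a crictl query for every expected control-plane container, all of which come back empty. As a rough way to reproduce the container check by hand on the node, the sketch below shells out to crictl with the same flags the log lines show; the helper name and the component list are illustrative, not taken from minikube's own code, and it assumes crictl and sudo are available.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainer mirrors the per-component check in the log: ask crictl for any
// container (running or exited) whose name matches `name` and report whether at
// least one ID came back. Illustrative sketch only.
func probeContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	// The same components the wait loop above keeps polling for.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		found, err := probeContainer(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: found=%v\n", name, found)
	}
}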
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
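The recurring "The connection to the server localhost:8443 was refused" line is the underlying symptom: kubectl describe nodes cannot work because nothing is listening on the apiserver port. A plain TCP dial is enough to separate "connection refused" (no listener yet) from a hang or timeout (host unreachable); the sketch below is an assumed illustration, with the address taken from the error text and the 2-second timeout chosen arbitrarily.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the endpoint the failing kubectl calls in this log use.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the kubectl error above: the host
		// is reachable but no process has the port open yet.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}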
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
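The timestamps show the whole probe sequence repeating roughly every three seconds, which is the shape of a simple poll-until-deadline loop. A minimal sketch of that loop follows, reusing the pgrep command visible in the log; the 3-second interval is read off the timestamps and the 5-minute deadline is only an example value, not minikube's actual setting.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp is a stand-in for the per-iteration probe in the log: pgrep for a
// kube-apiserver process started for this minikube profile. Illustrative only.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if apiserverUp() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process found")
}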
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
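	The block above is one complete pass of minikube's control-plane wait loop: it probes for a kube-apiserver process, lists CRI containers for each expected component, and, finding none, gathers kubelet/dmesg/CRI-O/container-status diagnostics before retrying a few seconds later. The bash sketch below illustrates that loop using only the commands visible in this log; it is not minikube's actual Go implementation, and the deadline and sleep interval are assumptions.

	    #!/usr/bin/env bash
	    # Sketch of the wait loop visible in the log above (assumed structure,
	    # not minikube's real code). Each pass: check for an apiserver process,
	    # list CRI containers per component, then gather diagnostics and retry.
	    set -u

	    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard)

	    deadline=$((SECONDS + 240))      # the log shows roughly a 4-minute wait
	    while [ "$SECONDS" -lt "$deadline" ]; do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process found"
	        exit 0
	      fi
	      for c in "${components[@]}"; do
	        ids=$(sudo crictl ps -a --quiet --name="$c")
	        [ -z "$ids" ] && echo "No container was found matching \"$c\""
	      done
	      # same diagnostics gathered on every pass in the log
	      sudo journalctl -u kubelet -n 400 >/dev/null
	      sudo journalctl -u crio -n 400 >/dev/null
	      sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 >/dev/null
	      sleep 3                        # assumed interval; the log shows ~2-3s
	    done
	    echo "control plane did not come up before the deadline" >&2
	    exit 1

	The `kubectl describe nodes` step fails on every pass for the same reason the components are missing: with no apiserver listening, the connection to localhost:8443 is refused.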
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	* 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	* 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
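The failure recorded above is the K8S_KUBELET_NOT_RUNNING exit: kubeadm waited for the kubelet's http://localhost:10248/healthz endpoint and the connection was refused until the wait-control-plane phase timed out. As a rough follow-up sketch (not part of the recorded run), the commands below simply replay the diagnostics the log itself recommends against this profile; the profile name and start flags are copied from the failed invocation, and the cgroup-driver flag is the remedy minikube suggests in its own output rather than a confirmed fix:

	# Inspect the kubelet inside the profile's VM (the systemctl/journalctl hints from the kubeadm output above)
	minikube ssh -p old-k8s-version-234290 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-234290 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# List crashed control-plane containers with crictl, as the kubeadm hint recommends
	minikube ssh -p old-k8s-version-234290 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# If the journal points at a cgroup-driver mismatch, retry the start with the flag suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-234290 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

These steps mirror the suggestions printed in the stderr block above and do not alter the recorded logs that follow.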
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (227.094756ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25: (1.63486142s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
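Note on the image-loading phase recorded above: minikube copies cached image tarballs into /var/lib/minikube/images on the VM (skipping ones that already exist) and then runs "sudo podman load -i <tarball>" for each. The following is a minimal, illustrative Go sketch of that load step only; it is not minikube's ssh_runner implementation, and the host address and the runRemote helper are assumptions for the example.

// loadcache.go: illustrative sketch of loading a cached image tarball on a
// remote VM with podman, mirroring the "podman load -i ..." calls above.
package main

import (
	"fmt"
	"os/exec"
)

// runRemote is a stand-in for minikube's ssh_runner; for illustration it
// simply shells out to the local "ssh" binary.
func runRemote(host string, args ...string) error {
	cmd := exec.Command("ssh", append([]string{host}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("remote command failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	host := "docker@192.168.50.13" // example VM address taken from the log
	image := "/var/lib/minikube/images/etcd_3.5.15-0"

	// Load the tarball into the CRI-O image store via podman, as in the
	// "Loading image: /var/lib/minikube/images/..." lines above.
	if err := runRemote(host, "sudo", "podman", "load", "-i", image); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("loaded", image)
}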
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
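The YAML above is the complete kubeadm configuration that minikube renders and writes to /var/tmp/minikube/kubeadm.yaml.new before diffing it against the existing file. As a rough illustration of how a manifest like this can be rendered from Go values, here is a generic text/template sketch; the template and struct fields are assumptions for the example and are not minikube's actual template.

// kubeadmtpl.go: generic sketch of rendering a kubeadm-style YAML fragment
// from Go values with text/template.
package main

import (
	"os"
	"text/template"
)

const tpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	// Values taken from the log above.
	data := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.50.13", 8443, "no-preload-239327"}

	t := template.Must(template.New("kubeadm").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}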
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
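The six "openssl x509 -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go equivalent of that check is sketched below; the certificate path is just an example taken from the log.

// certcheck.go: sketch of the "openssl x509 -checkend 86400" test in Go:
// report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" is past the certificate's NotAfter timestamp.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}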
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
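At this point process 71424 polls the apiserver's /healthz endpoint until it answers with HTTP 200. Below is a stripped-down Go sketch of such a poll loop; the URL is the one from the log, while the timeout value and the TLS-verification-skipping client are assumptions made only to keep the example self-contained.

// healthz.go: sketch of waiting for the kube-apiserver /healthz endpoint,
// similar in spirit to the api_server.go polling recorded above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver presents a cluster-signed certificate here; verification
	// is skipped for this illustration only. Real tooling pins the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.13:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}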
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
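The provisioning step above generates a docker-machine style server certificate for the VM with the listed SANs (loopback, the VM IP, the machine name, localhost, minikube). For illustration, here is a compact Go sketch that issues a certificate with such IP and DNS SANs using crypto/x509; it is self-signed for brevity, whereas the real provisioner signs with the machine CA key, and the validity period below is only an assumption echoing the CertExpiration value seen earlier in this log.

// sancert.go: sketch of generating a server certificate with IP and DNS SANs,
// as in the "generating server cert ... san=[...]" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-512125"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // assumed validity, mirroring CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.3")},
		DNSNames:     []string{"default-k8s-diff-port-512125", "localhost", "minikube"},
	}
	// Self-signed: the template acts as its own parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}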
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
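The fix.go lines above read the guest's wall clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and accept the result because the ~85ms delta is inside the allowed drift. A minimal Go sketch of that comparison follows; the one-second tolerance is an assumed illustrative value, not necessarily minikube's exact constant.

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host clock.
    // The tolerance passed in is illustrative only.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(85 * time.Millisecond) // roughly the ~85ms delta seen above
        d, ok := clockDeltaOK(guest, host, time.Second)
        fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
    }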
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
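The preload path above is a plain existence check: `crictl images` does not show the expected v1.31.1 control-plane images, `stat /preloaded.tar.lz4` exits non-zero, so the cached tarball is copied to the guest and unpacked into /var with lz4. A rough Go sketch of that decision; `runFunc` and `copyFunc` are hypothetical stand-ins for minikube's ssh_runner/scp plumbing, assumed only for illustration.

    package provision

    import "fmt"

    // runFunc runs a command on the guest (e.g. over SSH) and returns its output;
    // copyFunc copies a local file to the guest. Both are illustrative stand-ins.
    type runFunc func(args ...string) (string, error)
    type copyFunc func(localPath, remotePath string) error

    // preloadIfMissing mirrors the flow seen in the log: copy the cached tarball over
    // when the guest lacks it, then extract it into /var with lz4.
    func preloadIfMissing(run runFunc, copyTo copyFunc, localTarball string) error {
        if _, err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
            // not on the guest yet, which is the branch taken in the log above
            if err := copyTo(localTarball, "/preloaded.tar.lz4"); err != nil {
                return fmt.Errorf("copy preload tarball: %w", err)
            }
        }
        _, err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        return err
    }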
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
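The repeated /healthz probes above are a simple poll loop: anonymous requests first get 403 from RBAC, then 500 while post-start hooks (rbac/bootstrap-roles, scheduling priority classes, bootstrap-controller) finish, and finally 200 with body "ok". A minimal sketch of such a poller follows; skipping TLS verification for the probe is an assumption made here because the apiserver serves a cluster-internal certificate, and the interval and timeout are illustrative.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the
    // deadline passes; 403 and 500 responses (as seen in the log) count as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the probe trusts the endpoint blindly; fine for a readiness check only
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.13:8443/healthz", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }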
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
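Each "waiting up to 4m0s for pod ..." block above short-circuits because the hosting node no-preload-239327 still reports Ready=False immediately after the restart. A minimal client-go sketch of the underlying node-readiness check; the kubeconfig path is a placeholder, and this is only an illustration of the condition lookup, not minikube's actual pod_ready implementation.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(context.Background(), cs, "no-preload-239327")
        fmt.Println("node Ready:", ready, "err:", err)
    }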
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
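The old-k8s-version-234290 VM above has not received a DHCP lease yet, so libmachine keeps re-querying its IP with a small randomized backoff ("will retry after 304ms / 242ms / ..."). A generic sketch of that retry-with-jitter pattern; the attempt count and delay bound are assumed values, not minikube's exact ones.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter calls fn until it succeeds or attempts run out, sleeping a random
    // duration (up to maxDelay) between tries, similar to the retry.go lines above.
    func retryWithJitter(attempts int, maxDelay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(rand.Int63n(int64(maxDelay)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        i := 0
        err := retryWithJitter(10, time.Second, func() error {
            i++
            if i < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done:", err)
    }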
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
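
The addon block above copies manifests into /etc/kubernetes/addons and applies them with the bundled kubectl against the node-local kubeconfig. A hedged sketch of that step in Go follows; the paths and the sudo/KUBECONFIG invocation are taken from the log lines, while the applyAddon helper itself is invented for illustration and would need to run on the node (or be wrapped in an SSH call).

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// applyAddon runs the same "sudo KUBECONFIG=... kubectl apply -f ..." command
	// that appears in the log above for the given manifest paths.
	func applyAddon(manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("sudo", append([]string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.1/kubectl"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println(err)
		}
	}
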
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
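
The kubeadm config printed above is generated from the cluster settings and then written to /var/tmp/minikube/kubeadm.yaml.new on the node. The sketch below renders a trimmed-down version of that document with Go's text/template; it is an illustration of the rendering step, not minikube's actual template, and only the InitConfiguration fields visible in the log are included.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// tmpl renders a trimmed-down InitConfiguration; the real file above also
	// carries ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`
	
	func main() {
		data := struct {
			AdvertiseAddress string
			BindPort         int
			NodeName         string
		}{"192.168.61.3", 8444, "default-k8s-diff-port-512125"}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
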
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
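
The series of `openssl x509 -noout -in ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check can be done in Go with crypto/x509; this is a sketch, with the path taken from one of the files named in the log.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, which is the question "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
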
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
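
For the restart path above, the control plane is rebuilt by running individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. The sketch below drives that same sequence from Go via bash; the PATH prefix and config path are copied from the log, while the runPhase helper is invented for illustration.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runPhase invokes a single "kubeadm init phase", as the log above does for
	// certs, kubeconfig, kubelet-start, control-plane and etcd in turn.
	func runPhase(phase string) error {
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}
	
	func main() {
		for _, p := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
			if err := runPhase(p); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
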
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
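
The healthz block above polls https://192.168.61.3:8444/healthz until it stops refusing connections, tolerates the transient 403 and 500 answers while post-start hooks finish, and succeeds once a 200 comes back. A minimal polling sketch follows; TLS verification is skipped here purely to keep the example self-contained, whereas a real client would trust the cluster CA and present client certificates.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.61.3:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
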
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
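
The node_ready and pod_ready lines above repeatedly fetch each object and wait for its Ready condition to become True. A sketch of the same check with client-go follows; the imports are standard client-go packages, the kubeconfig path and pod name are taken from the log, and the polling loop is an illustration rather than minikube's helper.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True, which is the
	// predicate behind the "Ready":"True" lines above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-239327", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
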
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
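
"Configuring bridge CNI" above amounts to writing a conflist into /etc/cni/net.d on the node. The log does not show the file's contents, so the JSON embedded in this sketch is an illustrative bridge-plus-portmap configuration for the 10.244.0.0/16 pod CIDR, not the exact bytes minikube writes.

	package main
	
	import "os"
	
	// conflist is an illustrative bridge CNI configuration for the pod CIDR used
	// above; the real file written by minikube may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`
	
	func main() {
		// Written to the path the log shows being populated over scp.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
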
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
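(The runtime checks logged just above — crictl version and crio --version — can be reproduced by hand on the provisioned VM. A minimal sketch, assuming SSH access via `minikube ssh -p old-k8s-version-234290` and the CRI-O socket path shown in the log; this is an illustration, not output from the test run.)

    # report the CRI socket and runtime version seen above (socket path from the log)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # confirm the crio service came back up after the config edits and restart
    sudo systemctl is-active crio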
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
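(The cluster definition in the preceding line is persisted to the profile's config.json, saved earlier at /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json. A minimal sketch for inspecting it on the Jenkins host; the jq invocation and the exact JSON key names are assumptions for illustration, not something the test executes.)

    # pull out the fields the kubeadm update step cares about (names as printed in the log)
    jq '.KubernetesConfig.KubernetesVersion, .KubernetesConfig.ContainerRuntime, .Nodes[0].IP' \
      /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json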
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
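	Editor's note: the cache paths in the lines above follow a simple convention — the full image reference is stored under .minikube/cache/images/<arch>/ with the tag separator ':' replaced by '_' (e.g. kube-controller-manager:v1.20.0 → kube-controller-manager_v1.20.0). The following is a minimal, hypothetical Go sketch of that mapping, for illustration only; the helper name and base directory are assumptions, not minikube's actual code.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image reference such as
// "registry.k8s.io/kube-proxy:v1.20.0" to the on-disk cache location
// seen in the log, e.g. <cacheDir>/images/amd64/registry.k8s.io/kube-proxy_v1.20.0.
// Illustrative sketch only, not minikube's implementation.
func cachedImagePath(cacheDir, arch, image string) string {
	// Replace the tag separator so the reference is a valid file name.
	file := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(cacheDir, "images", arch, file)
}

func main() {
	p := cachedImagePath("/home/jenkins/minikube-integration/19636-3902/.minikube/cache",
		"amd64", "registry.k8s.io/kube-controller-manager:v1.20.0")
	fmt.Println(p)
}
```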
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
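	Editor's note: the kubelet drop-in printed above is generated from the node's settings (binaries directory, CRI socket, hostname override, node IP). Below is a hedged Go sketch of rendering such a unit with text/template; the template covers only a subset of the flags shown, and the field names are assumptions for illustration rather than minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values that vary per node in the ExecStart line
// shown in the log. Field names are illustrative.
type kubeletOpts struct {
	BinDir, CRISocket, Hostname, NodeIP string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix://{{.CRISocket}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Render to stdout; a real provisioner would scp the result to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as the log shows.
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:    "/var/lib/minikube/binaries/v1.20.0",
		CRISocket: "/var/run/crio/crio.sock",
		Hostname:  "old-k8s-version-234290",
		NodeIP:    "192.168.72.137",
	})
}
```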
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
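	Editor's note: the generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). The standard-library-only Go sketch below simply lists the kind: of each document in such a stream; it is illustrative and performs no real YAML parsing or validation.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// kindsOf scans a multi-document YAML stream and reports the value of
// each top-level "kind:" field it finds. Purely illustrative; no schema
// validation and no proper YAML decoding.
func kindsOf(stream string) []string {
	var kinds []string
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "kind:") {
			kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	return kinds
}

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	fmt.Println(kindsOf(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}
```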
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
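	Editor's note: the one-liner above is idempotent — it filters out any stale control-plane.minikube.internal entry and appends the current one, so repeated runs converge on a single line. A hedged Go equivalent of that filter-and-append step is sketched below; it operates on a string rather than writing /etc/hosts directly.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry drops any existing line for the given hostname and
// appends "ip\thostname", mirroring the grep -v / echo pipeline in the
// log. Illustrative only; real code would write via a temp file and cp,
// as the shell command does.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.72.137", "control-plane.minikube.internal"))
}
```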
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
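	Editor's note: each CA bundle installed under /usr/share/ca-certificates is exposed to OpenSSL via a symlink named after its subject hash — the value printed by `openssl x509 -hash -noout` (e.g. b5213941.0 for minikubeCA above). The sketch below shells out to openssl for the hash and creates the link with os.ReadFile-free logic; it assumes the openssl CLI is on PATH and is not minikube's code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// creates /etc/ssl/certs/<hash>.0 -> certPath, matching the c_rehash-style
// layout built in the log. Requires the openssl CLI and root privileges.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Replace any stale link, mirroring `ln -fs` in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```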
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
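	Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is how the control-plane certs above are validated before being reused. The same check can be done in-process with crypto/x509; the helper below is a minimal hypothetical sketch, not minikube's.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// for at least the given duration, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```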
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
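	Editor's note: on a cluster restart the control plane is rebuilt phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than via a full `kubeadm init`, as the five commands above show. A hedged Go sketch of running that phase sequence with os/exec follows; the paths mirror the log, but the wrapper itself is illustrative (the real code runs these over SSH with sudo on the guest).

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases executes the kubeadm init phases in the same order as
// the log above. Illustrative local wrapper only.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		// Prepend the bundled binaries directory, like the `env PATH=...` in the log.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	_ = runInitPhases("/var/lib/minikube/binaries/v1.20.0", "/var/tmp/minikube/kubeadm.yaml")
}
```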
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
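	Editor's note: the repeated `pgrep -xnf kube-apiserver.*minikube.*` calls above are a poll loop — the process list is checked roughly every 500ms until the apiserver process appears or a timeout expires. A generic poll-until-timeout sketch in Go is shown below; it is hypothetical and not minikube's wait helper.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` at the given interval until
// it succeeds or the timeout expires, mirroring the apiserver wait loop
// in the log. Purely illustrative.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // process found
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}
```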
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
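	Editor's note: the guest clock check above compares `date +%s.%N` output from the VM with the host's wall clock and accepts the machine when the delta stays within a tolerance (here ~75ms passed). The sketch below reproduces that comparison; the one-second tolerance is an assumption for illustration, since the log does not print the actual threshold.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from hostNow. Illustrative helper only.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return guest.Sub(hostNow), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance for this sketch
	// Values taken from the log lines above.
	delta, _ := clockDelta("1726257510.303110870", time.Unix(1726257510, 227940037))
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() <= tolerance)
}
```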
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
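When the bridge-nf-call-iptables sysctl is missing (the status 255 above), the run falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that same sequence, assuming root privileges and shelling out to modprobe exactly as the runner does; it is illustrative only, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged steps: if the bridge netfilter
// sysctl file does not exist, load the br_netfilter module (which creates
// it), then write 1 to /proc/sys/net/ipv4/ip_forward.
func ensureBridgeNetfilter() error {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", mErr, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, "netfilter setup:", err)
	}
}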
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
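The bash one-liner above pins control-plane.minikube.internal in the guest's /etc/hosts by dropping any stale line for that name and appending a fresh one. An equivalent stand-alone Go sketch (path and permissions assumed; a real run needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any existing "<ip>\t<name>" line for name and appends
// a new entry, roughly what the grep -v / echo pipeline above does.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as: grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.39.32", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}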
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
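Each CA certificate above is installed in two steps: the PEM is hashed with openssl x509 -hash and then symlinked as /etc/ssl/certs/<hash>.0 so TLS stacks can resolve it by subject hash. A small Go sketch of that pattern, shelling out to openssl just as the runner does (paths are only examples):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the certificate's subject hash via openssl and creates
// the /etc/ssl/certs/<hash>.0 symlink pointing at the PEM file.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl -hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}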
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
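The healthz probes above show the expected progression after a control-plane restart: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, then 200 "ok". A stand-alone Go sketch of such a poll loop; the timeout, ~500ms cadence and the InsecureSkipVerify shortcut are assumptions for illustration, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes. Early 403 (anonymous user) and 500
// (poststarthooks still running) responses are treated as "not ready yet".
// TLS verification is skipped because the probe runs anonymously against
// the cluster's self-signed serving certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.32:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}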
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
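The run above (PID 71926) probes roughly every 500ms for a running kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`. A minimal Go sketch of that polling pattern is below; it is not minikube's ssh_runner code, and running the command locally via os/exec (rather than over the VM's SSH session) is an assumption for illustration only.

```go
// Illustrative sketch only: polls for a kube-apiserver process the way the
// repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above suggest.
// Executing locally via os/exec is an assumption; minikube issues the command
// over SSH to the guest VM.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```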
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
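The preceding block (PID 71233) copies the addon manifests into /etc/kubernetes/addons/ and applies them with the guest's kubectl under sudo with KUBECONFIG set. The sketch below mirrors that apply step; the binary and manifest paths are taken from the log, while running the command locally instead of through minikube's SSH runner is an assumption.

```go
// Illustrative sketch only: re-creates the logged addon apply step
// ("sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f ...").
// Paths come from the log; local execution is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddonManifests(kubectl string, manifests []string) error {
	// sudo accepts leading VAR=value arguments as environment assignments.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.31.1/kubectl", manifests); err != nil {
		fmt.Println(err)
	}
}
```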
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
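The pod_ready.go lines above wait on each control-plane pod's Ready condition in kube-system before moving on. A minimal client-go sketch of that check follows; it uses only standard client-go calls, and the kubeconfig path (taken from the log's host kubeconfig), the target pod name, and the 2s polling cadence are assumptions rather than minikube's actual helper.

```go
// Illustrative sketch only: checks a pod's Ready condition the way the
// pod_ready.go waits above report it. Standard client-go API; kubeconfig
// path, pod name, and polling interval are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19636-3902/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" lines
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-175374", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```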
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
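The diagnostics pass above (PID 71926) lists CRI containers component by component and, finding none while the apiserver refuses connections on localhost:8443, falls back to collecting kubelet, dmesg, and CRI-O logs plus container status. The sketch below reproduces that gathering order with the same commands from the log; running them locally rather than over SSH is an assumption.

```go
// Illustrative sketch only: mirrors the log-gathering pass above. Commands are
// taken verbatim from the report; local execution is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids := run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
		if ids == "" {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s containers: %s\n", name, ids)
		}
	}
	// Fallback log sources, gathered in the same order as the report.
	fmt.Println(run("bash", "-c", "sudo journalctl -u kubelet -n 400"))
	fmt.Println(run("bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
	fmt.Println(run("bash", "-c", "sudo journalctl -u crio -n 400"))
	fmt.Println(run("bash", "-c", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
}
```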
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
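The cycle above is minikube's log gatherer probing a control plane that never came up: every crictl query returns an empty container list, so the fallback "describe nodes" call to the apiserver on localhost:8443 is refused. A minimal sketch of the equivalent manual checks over SSH on the node (commands and paths are taken verbatim from the log above; this is an illustrative sketch, not part of the test run):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no output -> no apiserver process running
	sudo crictl ps -a --quiet --name=kube-apiserver     # no IDs -> no apiserver container, not even an exited one
	sudo journalctl -u kubelet -n 400                   # kubelet logs: why static pods are not starting
	sudo journalctl -u crio -n 400                      # CRI-O logs: runtime-side errors
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	                                                    # keeps failing with "connection refused" until the apiserver is up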
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
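The interleaved pod_ready lines come from the other StartStop clusters in this run (processes 71233, 71702 and 71424), each polling a metrics-server pod that never reports Ready. A minimal sketch for inspecting one such pod by hand (the pod name is taken from the log; the matching kubectl context for that profile is assumed and omitted, and the Deployment name metrics-server is inferred from the pod-name prefix):

	kubectl -n kube-system get pod metrics-server-6867b74b74-fnznh -o wide   # phase, restarts, node
	kubectl -n kube-system describe pod metrics-server-6867b74b74-fnznh      # events: image pulls, failed probes, scheduling
	kubectl -n kube-system logs deploy/metrics-server                        # container logs, e.g. kubelet scrape or TLS errors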
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
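	The cycle above repeats "sudo crictl ps -a --quiet --name=<component>" for each control-plane component and treats empty output as "0 containers". A minimal Go sketch of that one step, assuming only that crictl is on PATH; the helper names are illustrative, not minikube's own code:

	// listContainers returns the container IDs crictl prints (one per line).
	// An empty slice corresponds to the "0 containers" results logged above
	// while the apiserver is down.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// crictl --quiet prints only IDs; Fields also handles the empty case.
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Println(name, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}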
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
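	The lines above show the readiness wait for metrics-server-6867b74b74-bq7jp giving up after its 4m deadline with "context deadline exceeded". A hedged client-go sketch of a deadline-bounded Ready wait of that shape; the kubeconfig path, namespace, and pod name are taken from the log, while the helper itself is an assumption, not minikube's pod_ready implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod's Ready condition until it is True or the
	// timeout expires, returning the context-deadline error seen in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 4 minutes mirrors the deadline reported above.
		err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-bq7jp", 4*time.Minute)
		fmt.Println("ready wait result:", err)
	}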
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
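	Here the harness probes https://192.168.50.13:8443/healthz and accepts the 200 "ok" reply as apiserver health. A small illustrative probe under the same assumptions; the address is taken from the log, and skipping TLS verification stands in for loading the cluster CA that a real client would use:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz returns nil only for a 200 response whose body is "ok",
	// matching the two log lines above.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: verification is skipped for brevity; load the cluster
			// CA certificate instead in real use.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		fmt.Println(checkHealthz("https://192.168.50.13:8443/healthz"))
	}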
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
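	Before printing "Done!", the run above (pid 71424) waits in turn for the kube-system pods, the default service account, the kubelet service, and the NodePressure condition. The pod wait can be approximated with client-go; the following is a simplified sketch, not minikube's exact check, and the 2s interval and 6m timeout are assumptions:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitKubeSystemRunning polls until every kube-system pod reports phase
    // Running or Succeeded, loosely mirroring the "waiting for k8s-apps to be
    // running" step in the log. Interval and timeout values are assumptions.
    func waitKubeSystemRunning(ctx context.Context, cs *kubernetes.Clientset) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        // Kubeconfig path taken from the command lines in the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitKubeSystemRunning(context.Background(), cs); err != nil {
            panic(err)
        }
    }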
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
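	Each "Gathering logs for ..." step above boils down to running a fixed shell pipeline per source (kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg). A short sketch that reuses the exact command lines from the log, run locally instead of over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Each log source maps to a shell pipeline copied verbatim from the log above.
    var sources = map[string]string{
        "kubelet": "sudo journalctl -u kubelet -n 400",
        "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "CRI-O":   "sudo journalctl -u crio -n 400",
    }

    func main() {
        for name, cmdline := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n%s\n", name, len(out), out)
        }
    }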
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
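	The sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and otherwise removed so kubeadm can regenerate it. A rough Go equivalent of that grep-then-remove loop, with the endpoint for this profile (port 8443, as in the grep commands above) hard-coded as an assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Assumption: endpoint copied from the grep commands in the log above.
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }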
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
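	The repeated pod_ready lines above come from polling a single pod (metrics-server-6867b74b74-fnznh) for its Ready condition, which never turns True within the wait budget. The condition check itself reduces to something like the following sketch over client-go types (illustrative, not minikube's code):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether a pod's PodReady condition is True, which is
    // what the pod_ready checks above are waiting for.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative pod object; in the real check the pod is fetched from the API server.
        p := &corev1.Pod{}
        p.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
        fmt.Println(podIsReady(p)) // false, i.e. status "Ready":"False" as in the log
    }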
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
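	The half-second cadence of the "kubectl get sa default" runs above is a plain retry loop: keep asking for the default ServiceAccount until the call succeeds, then record the elapsed time as the elevateKubeSystemPrivileges duration. A sketch with an assumed overall timeout (the real one is not visible in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Paths copied from the command lines in the log above.
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        start := time.Now()
        deadline := start.Add(5 * time.Minute) // assumption: timeout not shown in the log
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                fmt.Printf("default service account ready after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the retries above
        }
        fmt.Println("timed out waiting for the default service account")
    }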
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
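The addon enablement above follows a two-step pattern visible in the log: each manifest is copied to `/etc/kubernetes/addons/` (`ssh_runner.go:362] scp ...`), then applied with the bundled kubectl against the cluster kubeconfig. A rough local sketch of the apply step, reusing the paths shown in the log (illustrative only, not minikube's actual addons code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// applyAddonManifests mirrors the `kubectl apply -f ...` invocations in the
// log: the bundled kubectl binary is pointed at the cluster's kubeconfig and
// given one -f flag per previously copied manifest.
func applyAddonManifests(kubectlBin, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlBin, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %s: %v\n%s", strings.Join(args, " "), err, out)
	}
	return nil
}

func main() {
	// Paths match those shown in the log; adjust for a real environment.
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Println(err)
	}
}
```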
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
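The `pod_ready.go` lines above poll each system-critical pod until its `Ready` condition reports `"True"` (the `has status "Ready":"True"` entries). A condensed sketch of that readiness check using client-go; the 2s polling interval and the overall structure are assumptions, only the condition check itself is standard Kubernetes semantics:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is "True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a named pod in kube-system until it is Ready or the
// timeout expires, analogous to the 6m0s waits shown above.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "coredns-7c65d6cfc9-2qg68", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```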
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
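The `api_server.go:253` check above issues a GET against `https://<node-ip>:<port>/healthz` and accepts an HTTP 200 with body `ok`. A minimal sketch of that probe; TLS verification is skipped here purely to keep the example short, which is an assumption (the real check would trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same probe as the "Checking apiserver healthz at
// ..." lines: GET /healthz and treat HTTP 200 with body "ok" as healthy.
// InsecureSkipVerify is used only to keep the sketch self-contained.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200: %s\n", url, body)
	return nil
}

func main() {
	// Endpoint taken from the log above.
	if err := checkHealthz("https://192.168.61.3:8444/healthz"); err != nil {
		fmt.Println(err)
	}
}
```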
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
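The `node_conditions.go` lines read each node's capacity (the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" entries) and verify that no pressure conditions are reported. A client-go sketch of that verification, as an illustrative reconstruction rather than the harness's actual code:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// verifyNodePressure lists all nodes, prints their ephemeral-storage and CPU
// capacity, and fails if any pressure condition is True.
func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s=True", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := verifyNodePressure(context.Background(), cs); err != nil {
		fmt.Println(err)
	}
}
```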
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
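The timestamps of the repeated `[kubelet-check]` failures above (20:03:04, :09, :19, :39, 20:04:19) show kubeadm probing the kubelet's local healthz endpoint on `localhost:10248` at roughly doubling intervals after the initial 40s timeout. A hedged sketch of that probe-with-backoff pattern; the intervals are inferred from the timestamps, not taken from kubeadm's source:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubeletHealthz polls http://localhost:10248/healthz with doubling
// waits, mirroring the cadence of the [kubelet-check] lines in the log,
// and returns nil as soon as the kubelet answers 200.
func probeKubeletHealthz(initial, max time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	wait := initial
	deadline := time.Now().Add(max)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		fmt.Printf("kubelet not healthy yet (%v); retrying in %s\n", err, wait)
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("kubelet did not become healthy within %s", max)
}

func main() {
	// 5s initial wait, give up after 4 minutes, matching the window seen above.
	if err := probeKubeletHealthz(5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```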
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
	
	
	==> CRI-O <==
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.467566359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726257979467544480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c97e494-7fe2-45ec-b755-bc0e7d1f164f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.468097700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66174734-df33-4991-93d8-1789c4275d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.468140851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66174734-df33-4991-93d8-1789c4275d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.468169923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=66174734-df33-4991-93d8-1789c4275d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.501542677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=053b601d-7021-4d81-9ecc-be6c46558f4c name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.501628017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=053b601d-7021-4d81-9ecc-be6c46558f4c name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.502563376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93ac3f23-64bd-4c0a-bc42-adcc99f6c75e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.502997855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726257979502974305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93ac3f23-64bd-4c0a-bc42-adcc99f6c75e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.503447150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bf71eeb-40af-4bd2-9285-6f20a47833c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.503492150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bf71eeb-40af-4bd2-9285-6f20a47833c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.503526593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1bf71eeb-40af-4bd2-9285-6f20a47833c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.542242934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94aa82b0-d2c1-4f86-ae6f-3d6f8fb2d6b6 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.542345489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94aa82b0-d2c1-4f86-ae6f-3d6f8fb2d6b6 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.543958294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfdf724d-15e1-4964-a504-641746163689 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.544318814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726257979544296635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfdf724d-15e1-4964-a504-641746163689 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.545136483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d1fc628-f46e-4cbc-87a9-386f3caa4678 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.545201520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d1fc628-f46e-4cbc-87a9-386f3caa4678 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.545245961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d1fc628-f46e-4cbc-87a9-386f3caa4678 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.582903180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67a23d02-dae6-4666-9352-d1f52b96e186 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.582972991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67a23d02-dae6-4666-9352-d1f52b96e186 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.584631532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c70b6e29-b575-4eb4-833d-f6c9416dae0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.585059685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726257979585036308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c70b6e29-b575-4eb4-833d-f6c9416dae0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.585584637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc4f7caf-7c61-4be2-ac96-e6f9e6cbc00e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.585640139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc4f7caf-7c61-4be2-ac96-e6f9e6cbc00e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:06:19 old-k8s-version-234290 crio[635]: time="2024-09-13 20:06:19.585674179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cc4f7caf-7c61-4be2-ac96-e6f9e6cbc00e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep13 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066109] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep13 19:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610500] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.676115] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.362178] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.066050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062575] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.203353] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.197412] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.328737] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.657608] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.063640] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.000194] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +13.374485] kauditd_printk_skb: 46 callbacks suppressed
	[Sep13 20:02] systemd-fstab-generator[5056]: Ignoring "noauto" option for root device
	[Sep13 20:04] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[  +0.071026] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:06:19 up 8 min,  0 users,  load average: 0.01, 0.07, 0.04
	Linux old-k8s-version-234290 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bc4360)
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: goroutine 148 [select]:
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c69ef0, 0x4f0ac20, 0xc000aed540, 0x1, 0xc0001000c0)
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000a9ec40, 0xc0001000c0)
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba6530, 0xc000b98e80)
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 13 20:06:16 old-k8s-version-234290 kubelet[5503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 13 20:06:16 old-k8s-version-234290 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 13 20:06:16 old-k8s-version-234290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 13 20:06:17 old-k8s-version-234290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 13 20:06:17 old-k8s-version-234290 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 13 20:06:17 old-k8s-version-234290 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 13 20:06:17 old-k8s-version-234290 kubelet[5559]: I0913 20:06:17.436860    5559 server.go:416] Version: v1.20.0
	Sep 13 20:06:17 old-k8s-version-234290 kubelet[5559]: I0913 20:06:17.437202    5559 server.go:837] Client rotation is on, will bootstrap in background
	Sep 13 20:06:17 old-k8s-version-234290 kubelet[5559]: I0913 20:06:17.444074    5559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 13 20:06:17 old-k8s-version-234290 kubelet[5559]: I0913 20:06:17.446693    5559 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 13 20:06:17 old-k8s-version-234290 kubelet[5559]: W0913 20:06:17.446952    5559 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (225.656712ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-234290" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (759.23s)
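The kubeadm output captured above fails the same way on every retry: the kubelet never answers on http://localhost:10248/healthz, the journal excerpt shows kubelet.service crash-looping (restart counter at 20) with "Cannot detect current cgroup on cgroup v2", and minikube's own exit message suggests retrying with the systemd cgroup driver. The lines below are only a triage sketch assembled from commands already present in the captured output; the profile name, Kubernetes version and --extra-config flag are taken from the log, while the --driver/--container-runtime flags are assumptions matching this KVM/crio test matrix.

	# Inspect why the kubelet keeps exiting (it crash-looped 20 times above)
	systemctl status kubelet
	journalctl -xeu kubelet -n 200

	# Confirm whether CRI-O ever started any control-plane containers (none were found above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the cgroup driver minikube suggests for this failure class
	minikube start -p old-k8s-version-234290 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
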

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0913 20:02:19.587950   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:02:48.868737   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-239327 -n no-preload-239327
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:11:19.690597167 +0000 UTC m=+6633.169602401
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
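What the test is polling for here is simply the dashboard workload coming back after the stop/start cycle: pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, within a 9m budget. A rough manual equivalent is sketched below, assuming the kubectl context carries the profile name (as it does elsewhere in this report); the label selector, namespace and 9m timeout come from the test output above.

	# List the dashboard pods the test is waiting on
	kubectl --context no-preload-239327 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# Block until they are Ready, with the same 9m ceiling the test uses
	kubectl --context no-preload-239327 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
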
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-239327 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-239327 logs -n 25: (2.183851762s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
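
For reference, the last entry in the table above is the "old-k8s-version-234290" start whose log follows; reassembled from the wrapped row into a single minikube command line (flags copied verbatim from the table):

    minikube start -p old-k8s-version-234290 \
      --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0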
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
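
Given this line format, the warnings and errors buried in the start log below can be pulled out with a simple filter. A minimal sketch, assuming the section has been saved to a plain-text file (the name last-start.log is hypothetical):

    # W/E/F are the warning, error and fatal severities of the [IWEF]mmdd prefix above.
    grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log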
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
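
The repeated "no route to host" lines above are libmachine (process 71233, the embed-certs-175374 restart) dialing the guest's SSH port at 192.168.39.32:22 every few seconds while the VM is unreachable. A hypothetical way to run the same probe by hand while debugging; nc and the loop are assumptions, only the address and port come from the log:

    # Probe TCP port 22 on the stuck guest until it answers or we give up.
    for i in $(seq 1 20); do
      if nc -z -w 3 192.168.39.32 22; then
        echo "ssh port reachable"; break
      fi
      echo "attempt $i: still unreachable"; sleep 3
    done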
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
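
The retry lines above show libmachine waiting for the no-preload-239327 VM to obtain a DHCP lease on network mk-no-preload-239327, with the backoff growing from roughly 200ms to several seconds. A sketch of checking the same lease by hand with the libvirt CLI, purely for illustration (minikube talks to libvirt through its Go bindings rather than shelling out to virsh):

    # List DHCP leases on the profile's network and look for the MAC
    # address reported in the log (52:54:00:14:8c:9d).
    sudo virsh net-dhcp-leases mk-no-preload-239327 | grep -i '52:54:00:14:8c:9d'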
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
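
The WaitForSSH step above amounts to running "exit 0" over SSH with the options shown in the DBG lines until it succeeds. The same invocation, reassembled with the options in conventional order ahead of the destination (all values copied from the log):

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa \
      -p 22 docker@192.168.50.13 'exit 0'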
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
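
The /etc/hosts snippet the provisioner just ran is an idempotent hostname-entry update; restated below with comments for readability (same commands as in the block above):

    # Skip the edit entirely if some line already ends with the new hostname.
    if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # A 127.0.1.1 entry exists: rewrite it in place.
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts
      else
        # No 127.0.1.1 entry yet: append one.
        echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts
      fi
    fi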
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
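
The server certificate generated here is issued for the SANs listed above (127.0.0.1, 192.168.50.13, localhost, minikube, no-preload-239327). A hypothetical follow-up check, not part of the test run, that prints the SANs actually embedded in the generated file (path taken from the log):

    openssl x509 \
      -in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'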
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
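
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it with the host clock, and accept the 74.9ms drift as within tolerance. A small sketch of that comparison; the one-second tolerance here is an assumption for illustration, not a value taken from the log.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch turns the output of `date +%s.%N` (e.g. "1726257450.452618583")
    // into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1726257450.452618583") // value seen in the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        tolerance := time.Second // assumed threshold for this sketch
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
        }
    }
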
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
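
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image and cgroup manager) with sed, loads br_netfilter after the sysctl probe fails, enables IP forwarding, and restarts CRI-O. A sketch that simply collects the same shell commands; in the real flow they are executed on the guest through ssh_runner, not locally.

    package main

    import "fmt"

    // crioConfigCommands reproduces the configuration steps visible in the log:
    // point CRI-O at the desired pause image, switch it to the cgroupfs cgroup
    // manager, enable the netfilter prerequisites, and restart the service.
    func crioConfigCommands(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            "sudo modprobe br_netfilter", // run above after the sysctl check returned status 255
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            "sudo systemctl daemon-reload && sudo systemctl restart crio",
        }
    }

    func main() {
        for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
            fmt.Println(cmd)
        }
    }
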
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
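
The bash one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal entry, appends a fresh "IP<TAB>hostname" line to a temp file, and copies it back with sudo. A sketch of the same idea in Go; writing /etc/hosts itself needs root, so this version only produces the temp copy.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for the given hostname and appends
    // a fresh "IP<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(hostsPath, ip, hostname string) (string, error) {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return "", err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // old entry, replaced below
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        tmp := "/tmp/hosts.minikube"
        return tmp, os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        tmp, err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal")
        if err != nil {
            panic(err)
        }
        fmt.Println("updated copy written to", tmp, "(the real flow copies it back with sudo cp)")
    }
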
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
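
Because no preload tarball exists for this Kubernetes version with CRI-O (crio.go:510 above), each required image is first checked with `stat` on the guest, skipped if the cached archive is already present, and then imported with `sudo podman load -i …`. A condensed sketch of that per-image loop, using plain os/exec against a local podman instead of minikube's ssh_runner.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    // loadCachedImage mirrors the pattern in the log: skip the copy when the
    // archive is already present, then import it into the container runtime.
    func loadCachedImage(archive string) error {
        if _, err := os.Stat(archive); err != nil {
            return fmt.Errorf("cached archive missing (would be copied first): %w", err)
        }
        cmd := exec.Command("sudo", "podman", "load", "-i", archive)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        dir := "/var/lib/minikube/images" // target directory seen in the log
        for _, name := range []string{
            "etcd_3.5.15-0", "coredns_v1.11.3", "kube-scheduler_v1.31.1",
            "kube-proxy_v1.31.1", "kube-controller-manager_v1.31.1",
            "kube-apiserver_v1.31.1", "storage-provisioner_v5",
        } {
            if err := loadCachedImage(filepath.Join(dir, name)); err != nil {
                log.Printf("load %s: %v", name, err)
            }
        }
    }
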
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
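
The kubelet drop-in above pins --hostname-override and --node-ip to the profile's node name and IP. A small text/template sketch that renders the same ExecStart line for a given node; this is illustrative only and is not minikube's own template.

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        // Values taken from the log above.
        err := tmpl.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.31.1", "no-preload-239327", "192.168.50.13"})
        if err != nil {
            panic(err)
        }
    }
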
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
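	The six `openssl x509 -noout -in <cert> -checkend 86400` probes above verify that each control-plane certificate stays valid for at least another 24 hours before the cluster is restarted. A minimal Go sketch of the same check (a hypothetical helper, not minikube's own code) could look like:

	// checkend.go — same test as `openssl x509 -checkend 86400`: does the
	// certificate remain valid for at least another 24 hours?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical single path; the log runs this check for several certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h — regenerate before restarting the control plane")
		}
	}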
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
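	The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are the restart path taken once existing configuration files were detected. A rough Go sketch of driving that same sequence (assumed versioned binary and config paths from the log, not minikube's implementation):

	// phases.go — run the logged kubeadm restart phases in order against the
	// generated kubeadm.yaml, stopping at the first failure.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Assumed path; the log prepends /var/lib/minikube/binaries/v1.31.1 to PATH instead.
		const kubeadm = "/var/lib/minikube/binaries/v1.31.1/kubeadm"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := append([]string{"init", "phase"}, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}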
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
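	From here the test polls `/healthz` on the apiserver until it returns 200; the 403 and 500 responses logged further down are what an anonymous probe sees while RBAC bootstrap roles and priority classes are still being created. A simplified Go sketch of that kind of polling loop (assumed endpoint and timeout, not minikube's api_server.go logic):

	// healthz.go — poll the apiserver healthz endpoint until it reports OK or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// An anonymous probe against the apiserver's self-managed cert skips TLS verification here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.13:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}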
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
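	The server certificate generated here is signed by the workspace CA and carries the SANs listed in the log (127.0.0.1, 192.168.61.3, the profile name, localhost, minikube). A condensed Go sketch of producing that kind of CA-signed server certificate (assumed file names, not minikube's provision code):

	// servercert.go — sign a server certificate with an existing CA, embedding IP and DNS SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/tls"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Assumed local copies of the CA pair; the log reads ca.pem / ca-key.pem under .minikube/certs.
		caPair, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caPair.Certificate[0])
		if err != nil {
			panic(err)
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-512125"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.3")},
			DNSNames:     []string{"default-k8s-diff-port-512125", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
	}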
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
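	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with `sed` (pause image, cgroupfs cgroup manager, sysctl defaults) and then restarts CRI-O. Purely as an illustration, the two main edits could be done programmatically; this Go sketch assumes the same drop-in path and keys as the logged commands:

	// crioconf.go — rewrite the CRI-O drop-in config the way the logged sed commands do.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(confPath)
		if err != nil {
			panic(err)
		}
		// Point CRI-O at the pause image used by this Kubernetes version.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		// Use cgroupfs as the cgroup manager, matching the kubelet configuration.
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(confPath, data, 0o644); err != nil {
			panic(err)
		}
		// A `systemctl restart crio`, as in the log, is still required to pick the change up.
	}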
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
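The healthz exchange above follows a predictable progression: 403 while the anonymous request is rejected, 500 while post-start hooks such as rbac/bootstrap-roles are still completing, then 200 once bootstrap finishes. A rough sketch of such a poll loop, assuming the apiserver's self-signed certificate (hence the insecure TLS transport) and an illustrative endpoint and timeout:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the deadline passes. 403 and 500 responses are treated as "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.50.13:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }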
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
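Every pod_ready entry above short-circuits for the same reason: the hosting node still reports Ready as False, so per-pod readiness is recorded as skipped rather than waited on. For reference, the per-pod Ready check itself can be expressed with client-go roughly as follows; the kubeconfig path and pod name are borrowed from the log for illustration, and the podReady helper is ours:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has its Ready condition set to True.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        ready, err := podReady(context.Background(), cs, "kube-system", "kube-proxy-b24zg")
        fmt.Println("ready:", ready, "err:", err)
    }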
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
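The old-k8s-version-234290 lines above are a single retry loop: libmachine looks up the domain's MAC address in the libvirt DHCP leases and, while no lease exists, backs off with a jittered, growing delay before trying again. The lookup itself is libvirt-specific, but the retry shape can be sketched generically; the lease query below is a stub:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP stands in for the real libvirt DHCP-lease query; it always
    // fails here so the retry behaviour is visible.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet for " + mac)
    }

    // waitForIP retries the lease lookup with a jittered, growing delay, the
    // same shape as the "will retry after ..." lines in the log.
    func waitForIP(mac string, attempts int) (string, error) {
        delay := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
            time.Sleep(delay + jitter)
            delay += delay / 2 // grow the base delay each attempt
        }
        return "", fmt.Errorf("machine %s never obtained an IP", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:11:33:43", 5); err != nil {
            fmt.Println(err)
        }
    }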
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
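The addon sequence above is: copy each manifest to /etc/kubernetes/addons/ over SSH, then apply the whole batch with the bundled kubectl against the in-VM kubeconfig. A minimal sketch of that final apply step, shelling out the way the log does; the paths mirror the log and the applyAddons helper is ours:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddons runs the bundled kubectl against the in-VM kubeconfig,
    // applying every manifest it is given in a single invocation.
    func applyAddons(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := applyAddons(
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        if err != nil {
            fmt.Println(err)
        }
    }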
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
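The five invocations above regenerate the cluster's certificates, kubeconfigs, kubelet bootstrap, static control-plane manifests, and local etcd manifest by running individual `kubeadm init phase` commands against the same /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch that drives the same phases in order via os/exec (the binary path is resolved from the PATH override shown in the log; error handling is simplified and this is not minikube's own code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		kubeadm = "/var/lib/minikube/binaries/v1.20.0/kubeadm" // from the PATH override in the log
		config  = "/var/tmp/minikube/kubeadm.yaml"
	)
	// Same phase order as in the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```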
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
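The half-second cadence of `sudo pgrep -xnf kube-apiserver.*minikube.*` above is the wait loop for the API server process to appear after the control-plane manifests are written; pgrep exits 0 only once a matching process exists. A small Go sketch of such a poll loop (the 2-minute timeout is an assumption for illustration; the log itself does not state the limit, and sudo is omitted):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process, mirroring the
// repeated pgrep calls in the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log lines above
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is running")
}
```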
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
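The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, conmon is moved to the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before crio is restarted. A minimal Go sketch of the same kind of key rewrite, shown for two of the keys (setCrioOption is a hypothetical helper, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = "value"` line in a CRI-O drop-in config,
// mirroring the `sed -i 's|^.*pause_image = .*$|...|'` style calls in the log.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```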
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
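The preload path above first stats /preloaded.tar.lz4 on the guest, copies the cached preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 over SSH because that stat failed, extracts it into /var with `tar -I lz4` while preserving xattrs, removes the tarball, and re-runs `crictl images` to confirm the images are now present. A minimal Go sketch of the extract-and-verify half of that flow (paths and flags taken from the log; the scp step is omitted and this is not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command on the local host, streaming its output.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4" // destination used in the log

	// Extract the preloaded images into /var, preserving xattrs, as in the log.
	if err := run("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// Remove the tarball, then let crictl confirm the images are present.
	_ = os.Remove(tarball)
	if err := run("crictl", "images", "--output", "json"); err != nil {
		fmt.Fprintln(os.Stderr, "crictl images failed:", err)
		os.Exit(1)
	}
}
```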
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
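
Note: the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents printed above are what the run writes to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged aside (not a step this test executes), a file of that shape can be sanity-checked on the node with kubeadm's own validator, assuming the pinned kubeadm binary shown in the log supports the `config validate` subcommand:

  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new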
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
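
The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values; the <hash>.0 symlinks are what let the system trust store in /etc/ssl/certs resolve each CA. A minimal sketch of the same step for one certificate, using a path taken from the log:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"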
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
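
Each of the openssl runs above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours). A standalone sketch of the same check, with the cert path taken from the log:

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "still valid for at least 24h" || echo "expires within 24h"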
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
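
The 403 responses above come from the anonymous /healthz probe being rejected before the RBAC bootstrap roles exist, and the 500s show the two remaining poststarthooks ([-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes) still pending; the wait ends once /healthz returns 200. A hedged way to reproduce the same probe by hand on the node, with the endpoint and binary path taken from the log:

  # unauthenticated probe (expect 403 while anonymous access is still forbidden)
  curl -k 'https://192.168.39.32:8443/healthz?verbose'
  # authenticated probe through the admin kubeconfig
  sudo KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.1/kubectl get --raw '/healthz?verbose'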
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
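
The addon sequence above copies each manifest to /etc/kubernetes/addons on the node and then applies it with the cluster's own kubectl binary pointed at the node-local kubeconfig. A minimal Go sketch of that apply step, assuming it runs on the minikube node itself and that the kubectl binary and kubeconfig paths match the ones shown in the log (any other setup would use different paths); it applies one manifest per invocation, unlike the combined multi-file apply in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon applies one addon manifest the same way the log shows:
    // the node-local kubectl binary, pointed at the node-local kubeconfig.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println(err)
            }
        }
    }
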
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
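
Process 71926 above is in a restart path where it polls roughly twice a second for a running kube-apiserver process before falling back to log collection. A minimal sketch of that wait loop, assuming it runs on the node and that pgrep's exit status (0 on a match, non-zero otherwise) is the only signal used; the 30-second timeout here is illustrative, not taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep, as the log does, until a kube-apiserver
    // process matching the minikube pattern shows up or the deadline passes.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        if waitForAPIServer(30 * time.Second) {
            fmt.Println("kube-apiserver process is up")
        } else {
            fmt.Println("timed out waiting for kube-apiserver")
        }
    }
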
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
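
Once the node reports Ready, the harness walks the system-critical pods one by one and waits for each to reach the Ready condition; metrics-server is the pod that keeps reporting Ready:False in the failing runs that follow. A rough equivalent of that per-pod wait using kubectl wait (an assumption: the harness uses its own pod_ready helper rather than kubectl wait, and the kubeconfig path here is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // waitPodReady blocks until the named pod reports condition Ready=True
    // or the timeout expires, by shelling out to kubectl wait.
    func waitPodReady(kubeconfig, namespace, pod, timeout string) error {
        cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
            "-n", namespace, "wait", "--for=condition=Ready",
            "pod/"+pod, "--timeout="+timeout)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("pod %s not Ready: %v: %s", pod, err, out)
        }
        return nil
    }

    func main() {
        for _, p := range []string{"etcd-embed-certs-175374", "kube-apiserver-embed-certs-175374"} {
            if err := waitPodReady("/var/lib/minikube/kubeconfig", "kube-system", p, "6m"); err != nil {
                fmt.Println(err)
            }
        }
    }
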
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
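
When the pgrep probe keeps failing, the harness switches to diagnostics: for each expected component it lists CRI containers by name with crictl, and an empty result is what produces the "No container was found matching ..." warnings above. A small sketch of that check, assuming crictl is available on the node and that empty --quiet output is the only "not found" signal:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasContainer reports whether the CRI runtime knows about any container
    // (running or exited) whose name matches the given component.
    func hasContainer(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        // --quiet prints one container ID per line; no output means no match.
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ok, err := hasContainer(c)
            if err != nil {
                fmt.Println("crictl error:", err)
                continue
            }
            if !ok {
                fmt.Printf("no container was found matching %q\n", c)
            }
        }
    }
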
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
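
Interleaved with the 71926 retries, three other start attempts (pids 71233, 71702 and 71424) are each waiting on a metrics-server pod whose Ready condition never turns True. A minimal sketch of that readiness check, assuming a plain kubectl jsonpath query rather than minikube's internal pod_ready helper (the pod name and namespace come from the log; the polling loop and interval are assumptions):

// Minimal sketch of a pod-readiness poll; not the test suite's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(namespace, name string) bool {
	out, err := exec.Command("kubectl", "--namespace", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(out)) == "True"
}

func main() {
	// Same pod the 71233 process keeps polling in the log.
	for i := 0; i < 5; i++ {
		if podReady("kube-system", "metrics-server-6867b74b74-fnznh") {
			fmt.Println("metrics-server is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"; retrying`)
		time.Sleep(2 * time.Second)
	}
}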
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
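
Every "describe nodes" attempt in this section fails the same way: the kubeconfig points at localhost:8443, the connection is refused, and kubectl exits with status 1, which is consistent with the missing kube-apiserver container above. A hypothetical stand-alone probe of that endpoint (not part of the test suite) would look like:

// Small sketch of the failing endpoint check; illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the error in the log: nothing is listening, so every
		// kubectl call through this endpoint fails.
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint is accepting connections")
}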
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
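	Each retry cycle in this log runs the same fixed set of host commands over SSH; pulling them verbatim from the lines above gives the following sketch for reproducing one cycle by hand on the VM (the loop is just shorthand around the per-component crictl calls, not something minikube executes as written):

	    # look for each expected control-plane / addon container in CRI-O
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # then gather the supporting logs, as in the "Gathering logs for ..." steps
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a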
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
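	At this point the 4-minute readiness deadline for metrics-server-6867b74b74-bq7jp has expired and the test moves on with the pod still not Ready. A hedged way to dig into the reason by hand would be ordinary kubectl against the same cluster (the pod name here is copied from the log and will differ between runs; none of these commands are part of the harness):

	    kubectl --namespace kube-system get pod metrics-server-6867b74b74-bq7jp -o wide
	    kubectl --namespace kube-system describe pod metrics-server-6867b74b74-bq7jp
	    kubectl --namespace kube-system logs metrics-server-6867b74b74-bq7jp --tail=100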
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
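	Unlike the run logged under 71926 above, the cluster in the lines tagged 71424 does have running control-plane containers, so the harness switches from "No container was found" warnings to tailing each container's log by ID with crictl. The same lookup-then-tail step can be reproduced by hand using only the invocations already shown in these lines (the container ID is simply whatever crictl ps printed for that component):

	    # resolve the kube-apiserver container ID, then tail its last 400 log lines
	    ID="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
	    [ -n "$ID" ] && sudo /usr/bin/crictl logs --tail 400 "$ID"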
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
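	(Annotation: "The connection to the server localhost:8443 was refused" means no kube-apiserver is serving on the node, so the node-local kubectl call fails; the same error repeats below until the control plane is rebuilt. A quick manual reproduction of the check, using the same paths the log shows minikube using on the node:)

	    # run the node-local kubectl with the node's kubeconfig, as minikube does
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # confirm whether anything is actually listening on the apiserver port
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"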
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
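	(Annotation: the pod_ready lines are minikube polling the metrics-server pod's Ready condition, which stays False until the 4m wait expires later in this log. An equivalent manual check, assuming kubectl points at the same cluster and the pod carries the usual k8s-app=metrics-server label:)

	    # print the Ready condition for each metrics-server pod in kube-system
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	    # recent events usually explain why the pod is stuck
	    kubectl -n kube-system describe pods -l k8s-app=metrics-server | tail -n 20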
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
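	(Annotation: the healthz probe above is a plain HTTPS GET against the apiserver endpoint logged two lines earlier. The same check can be made by hand; /healthz is deprecated in newer Kubernetes in favour of /livez and /readyz, and -k skips certificate verification, which is acceptable only for debugging:)

	    curl -k https://192.168.50.13:8443/healthz && echo
	    # newer, more detailed health endpoints
	    curl -k "https://192.168.50.13:8443/livez?verbose"
	    curl -k "https://192.168.50.13:8443/readyz?verbose"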
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
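	(Annotation: at this point the no-preload-239327 profile is up and kubectl's current context has been switched to it. A couple of sanity checks from the host, purely illustrative and not part of the test run:)

	    kubectl config current-context      # should print no-preload-239327
	    kubectl get nodes -o wide           # the single control-plane node, Ready
	    kubectl -n kube-system get pods     # should match the pod list logged above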
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
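	(Annotation: because the existing control plane could not be restarted, minikube falls back to wiping it and re-initialising. A sketch of what that fallback does on the node, with the flags copied from the logged command; the list of wiped paths is kubeadm's documented behaviour, not something printed in this log:)

	    # kubeadm reset stops the static control-plane pods and wipes their state
	    # (/etc/kubernetes/manifests, the /etc/kubernetes/*.conf kubeconfigs, /var/lib/etcd)
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # afterwards no control-plane containers should remain
	    sudo crictl ps -a --name kube-apiserver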
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
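	(Annotation: the grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; here the files were removed by the earlier kubeadm reset, so every grep exits with status 2 and the rm is a no-op. The same loop written out as a sketch:)

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	        # drop configs that point elsewhere (or, as here, do not exist)
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done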
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
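The four grep/rm cycles above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected API endpoint and deleted when it does not reference it (or, as here, does not exist at all). A minimal bash sketch of that logic, using the endpoint from this run:

endpoint="https://control-plane.minikube.internal:8444"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only if it already points at the expected endpoint
    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
    fi
done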
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
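The [api-check] phase polls the API server health endpoint until it answers ok. An equivalent manual probe on the node, assuming the non-default port 8444 this profile advertises:

# self-signed serving cert, hence -k
curl -k https://localhost:8444/healthz
# or through the kubectl copy minikube installs, once admin.conf exists
sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw /healthz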
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
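The 496-byte conflist written here is not echoed in the log; a typical bridge conflist of the kind minikube generates looks roughly like the following (the subnet and plugin options are illustrative assumptions, not the exact file from this run):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF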
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
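elevateKubeSystemPrivileges is the step timed above: minikube creates the minikube-rbac clusterrolebinding and then retries `kubectl get sa default` every half second until the default service account exists. A rough bash equivalent, reusing the binary path and kubeconfig from the logged commands:

KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
KCFG=/var/lib/minikube/kubeconfig
sudo $KUBECTL create clusterrolebinding minikube-rbac \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
    --kubeconfig=$KCFG
# the default service account appears once the controller-manager is up
until sudo $KUBECTL get sa default --kubeconfig=$KCFG >/dev/null 2>&1; do
    sleep 0.5
done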
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
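The readiness wait above walks the system-critical label selectors and blocks until every matching pod reports Ready. Roughly the same check expressed with kubectl wait (selectors copied from the log; the timeout is illustrative):

for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
done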
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
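Verifying metrics-server=true comes down to the v1beta1.metrics.k8s.io APIService registered by metrics-apiservice.yaml becoming Available. A quick manual check for the same condition (deployment name as used by the standard addon manifests):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
kubectl top nodes   # only answers once the metrics API is actually serving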
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
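
Note: the block above is minikube's standard log-gathering pass. It resolves each control-plane component to a CRI container ID with crictl ps, tails that container's logs, and falls back to journalctl for the kubelet and CRI-O units. A condensed sketch of the same commands, runnable by hand on the node; the commands and the 400-line tail are taken from the log, running them interactively (e.g. over minikube ssh) is an assumption:

    # list the container ID for a component (repeat for etcd, coredns, kube-scheduler, ...)
    sudo crictl ps -a --quiet --name=kube-apiserver

    # tail the last 400 lines of a container ID found above
    sudo /usr/bin/crictl logs --tail 400 <CONTAINER_ID>

    # unit logs for the pieces that are not containers
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
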
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
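
Note: process 71702 (the default-k8s-diff-port-512125 start) completes its readiness sequence here: each system-critical pod is polled until Ready, the apiserver process is located with pgrep, and the /healthz endpoint on port 8444 returns 200 before the profile is reported as Done. Roughly equivalent manual checks are sketched below; the address, port, and context name come from the log, the curl flag choice is an assumption:

    # apiserver health endpoint the test polled (8444 is this profile's diff port)
    curl -k https://192.168.61.3:8444/healthz

    # the kube-system pods the test waited on, via the context minikube just wrote
    kubectl --context default-k8s-diff-port-512125 -n kube-system get pods
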
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
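
Note: the embed-certs-175374 start finishes with the same closing checks: the kubelet unit must be active, and the node must report usable ephemeral storage and CPU capacity before NodePressure is considered verified. A minimal manual equivalent is sketched below; the assumption that the node name matches the profile name, and the grep filter, are choices for illustration only:

    sudo systemctl is-active kubelet
    kubectl --context embed-certs-175374 describe node embed-certs-175374 | grep -A 8 Capacity
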
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
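
Note: the first kubeadm init attempt for the v1.20.0 (old-k8s-version) profile times out because the kubelet never answers its local healthz probe on port 10248. The error text above already lists the next diagnostic steps; collected in one place as a sketch, with the CRI-O socket path taken verbatim from the log and the container ID left as a placeholder:

    systemctl status kubelet
    journalctl -xeu kubelet

    # list any control-plane containers CRI-O did manage to start
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>
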
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
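
Note: after kubeadm reset, minikube checks whether each leftover kubeconfig still points at the expected control-plane endpoint and deletes any that does not (here the files are simply missing, so every grep exits with status 2 and every rm runs). A condensed sketch of that check-and-remove sequence, under the assumption that folding the four command pairs above into one shell loop preserves their behavior:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
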
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
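
Note: with the retry also failing, the post-mortem pass above finds no kube-apiserver, etcd, kube-scheduler, or kube-controller-manager containers at all, and kubectl describe nodes is refused on localhost:8443, which points to a control plane that never came up rather than a component that crashed after starting. Two quick checks that distinguish those cases are sketched below; the socket path and port are from the log, the choice of commands is an assumption:

    # no API server container should mean nothing is listening on 8443
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name kube-apiserver
    sudo ss -tlnp | grep 8443
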
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
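To follow the troubleshooting advice printed above on the affected node, the commands would look roughly like the sketch below (socket path, health endpoint and cgroup-driver flag are all taken from the kubeadm/minikube output above, not independently verified here):

	# Check whether the kubelet is running and why it may have failed
	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz

	# List Kubernetes containers via the CRI-O socket and inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Retry the start with the cgroup driver suggested by minikube (see issue 4172 above)
	minikube start --extra-config=kubelet.cgroup-driver=systemd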
	
	
	==> CRI-O <==
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.262741240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258281262721172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f26a2cd0-02a9-4362-af76-7e1e9374207d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.263375462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84e2a195-4bb7-42d5-92e4-3a48432003e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.263445221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84e2a195-4bb7-42d5-92e4-3a48432003e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.263632033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84e2a195-4bb7-42d5-92e4-3a48432003e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.301184181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d7e4f2b-8cef-44f3-a976-8b9bb91acc28 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.301271391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d7e4f2b-8cef-44f3-a976-8b9bb91acc28 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.302779709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca2957df-d0d7-4ebd-9550-885f20968827 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.303261342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258281303238636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca2957df-d0d7-4ebd-9550-885f20968827 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.304051059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6b9dfc2-27ad-40ab-a6d2-5d57ff95638f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.304106674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6b9dfc2-27ad-40ab-a6d2-5d57ff95638f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.304288391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6b9dfc2-27ad-40ab-a6d2-5d57ff95638f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.346644497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f891ec1c-aabe-4d34-a3cc-33e59616babf name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.346744156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f891ec1c-aabe-4d34-a3cc-33e59616babf name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.348756031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddb08ba2-b046-4a69-a22b-d8350e6ee5da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.349227849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258281349203124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddb08ba2-b046-4a69-a22b-d8350e6ee5da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.349980321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46309723-67e1-425e-aa1c-831b1cd14eaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.350061525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46309723-67e1-425e-aa1c-831b1cd14eaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.350366413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46309723-67e1-425e-aa1c-831b1cd14eaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.388066562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffb36abc-c6b1-41f3-8bad-aad10d273255 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.388141273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffb36abc-c6b1-41f3-8bad-aad10d273255 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.389685767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce749c45-6cc6-47eb-816b-443bd4a68458 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.390096002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258281390071518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce749c45-6cc6-47eb-816b-443bd4a68458 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.390871312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd749f61-eebd-406a-9e78-a92d9f00030e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.390946886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd749f61-eebd-406a-9e78-a92d9f00030e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:11:21 no-preload-239327 crio[706]: time="2024-09-13 20:11:21.391141188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd749f61-eebd-406a-9e78-a92d9f00030e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc01d7b17bbc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   7385496e03b48       storage-provisioner
	7072134a2004c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   aad57f23f7b9d       busybox
	e70559352db6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   2a5a9a6660ec0       coredns-7c65d6cfc9-fjzxv
	4a9c61bb67732       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   7385496e03b48       storage-provisioner
	adbec8ff0ed7a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   e122a3e335a48       kube-proxy-b24zg
	a3490cc2f99b2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   414bfb6888204       etcd-no-preload-239327
	4c2bf4fed4e33       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   bf03222df8fb5       kube-scheduler-no-preload-239327
	e6169bebe5711       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c47a79173c956       kube-controller-manager-no-preload-239327
	7b1108fd58417       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   db3f50d1e105c       kube-apiserver-no-preload-239327
	
	
	==> coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42306 - 61102 "HINFO IN 6614262023756072451.4504198368740859932. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015714591s
	
	
	==> describe nodes <==
	Name:               no-preload-239327
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-239327
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=no-preload-239327
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_49_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:49:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-239327
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:08:36 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:08:36 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:08:36 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:08:36 +0000   Fri, 13 Sep 2024 19:58:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.13
	  Hostname:    no-preload-239327
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8853583287464402a98383f1ee71c8a5
	  System UUID:                88535832-8746-4402-a983-83f1ee71c8a5
	  Boot ID:                    299616d8-5112-4b28-a916-bc79aca3145c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-fjzxv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-239327                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-239327             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-239327    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-b24zg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-239327             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-bq7jp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-239327 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-239327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-239327 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-239327 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-239327 event: Registered Node no-preload-239327 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-239327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-239327 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-239327 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-239327 event: Registered Node no-preload-239327 in Controller
	
	
	==> dmesg <==
	[Sep13 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050867] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040055] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.450333] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.553765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.972016] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.061361] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063867] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.199840] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.117715] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.286650] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.330883] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.057567] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.769303] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +4.446395] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.675493] systemd-fstab-generator[1993]: Ignoring "noauto" option for root device
	[  +3.180062] kauditd_printk_skb: 61 callbacks suppressed
	[Sep13 19:58] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] <==
	{"level":"info","ts":"2024-09-13T19:58:38.595220Z","caller":"traceutil/trace.go:171","msg":"trace[1616674464] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-bq7jp; range_end:; response_count:1; response_revision:635; }","duration":"340.563694ms","start":"2024-09-13T19:58:38.254645Z","end":"2024-09-13T19:58:38.595209Z","steps":["trace[1616674464] 'agreement among raft nodes before linearized reading'  (duration: 339.525871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:38.595746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.254613Z","time spent":"341.119743ms","remote":"127.0.0.1:48220","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4362,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-bq7jp\" "}
	{"level":"warn","ts":"2024-09-13T19:58:38.594435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"506.372854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-09-13T19:58:38.596911Z","caller":"traceutil/trace.go:171","msg":"trace[127256959] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44; range_end:; response_count:1; response_revision:635; }","duration":"508.846423ms","start":"2024-09-13T19:58:38.088050Z","end":"2024-09-13T19:58:38.596896Z","steps":["trace[127256959] 'agreement among raft nodes before linearized reading'  (duration: 506.270908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:38.596971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.088013Z","time spent":"508.945413ms","remote":"127.0.0.1:48108","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":963,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44\" "}
	{"level":"warn","ts":"2024-09-13T19:58:38.860065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.535743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2041698453018029477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44\" mod_revision:617 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44\" value_size:830 lease:2041698453018028692 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407c5c44\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:58:38.860148Z","caller":"traceutil/trace.go:171","msg":"trace[1786721962] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:680; }","duration":"258.992391ms","start":"2024-09-13T19:58:38.601141Z","end":"2024-09-13T19:58:38.860134Z","steps":["trace[1786721962] 'read index received'  (duration: 122.294491ms)","trace[1786721962] 'applied index is now lower than readState.Index'  (duration: 136.697094ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:58:38.860252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.106747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-239327\" ","response":"range_response_count:1 size:4663"}
	{"level":"info","ts":"2024-09-13T19:58:38.860285Z","caller":"traceutil/trace.go:171","msg":"trace[1944658975] range","detail":"{range_begin:/registry/minions/no-preload-239327; range_end:; response_count:1; response_revision:636; }","duration":"259.139804ms","start":"2024-09-13T19:58:38.601139Z","end":"2024-09-13T19:58:38.860279Z","steps":["trace[1944658975] 'agreement among raft nodes before linearized reading'  (duration: 259.027053ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T19:58:38.860416Z","caller":"traceutil/trace.go:171","msg":"trace[1729978713] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"260.593969ms","start":"2024-09-13T19:58:38.599815Z","end":"2024-09-13T19:58:38.860409Z","steps":["trace[1729978713] 'process raft request'  (duration: 123.647902ms)","trace[1729978713] 'compare'  (duration: 136.265452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:58:39.351624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.492804ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:58:39.351707Z","caller":"traceutil/trace.go:171","msg":"trace[788229311] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:636; }","duration":"365.587976ms","start":"2024-09-13T19:58:38.986107Z","end":"2024-09-13T19:58:39.351695Z","steps":["trace[788229311] 'range keys from in-memory index tree'  (duration: 365.48118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.352463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.732487ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2041698453018029481 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" mod_revision:618 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" value_size:668 lease:2041698453018028692 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:58:39.352574Z","caller":"traceutil/trace.go:171","msg":"trace[1188443948] linearizableReadLoop","detail":"{readStateIndex:682; appliedIndex:681; }","duration":"486.471612ms","start":"2024-09-13T19:58:38.866095Z","end":"2024-09-13T19:58:39.352567Z","steps":["trace[1188443948] 'read index received'  (duration: 120.524016ms)","trace[1188443948] 'applied index is now lower than readState.Index'  (duration: 365.946524ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:58:39.352655Z","caller":"traceutil/trace.go:171","msg":"trace[1463682818] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"487.94197ms","start":"2024-09-13T19:58:38.864706Z","end":"2024-09-13T19:58:39.352648Z","steps":["trace[1463682818] 'process raft request'  (duration: 121.973667ms)","trace[1463682818] 'compare'  (duration: 364.520022ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:58:39.352865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.864687Z","time spent":"488.057523ms","remote":"127.0.0.1:48108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" mod_revision:618 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" value_size:668 lease:2041698453018028692 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" > >"}
	{"level":"warn","ts":"2024-09-13T19:58:39.353100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.286032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:58:39.353158Z","caller":"traceutil/trace.go:171","msg":"trace[475232123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:637; }","duration":"428.348122ms","start":"2024-09-13T19:58:38.924801Z","end":"2024-09-13T19:58:39.353149Z","steps":["trace[475232123] 'agreement among raft nodes before linearized reading'  (duration: 428.242447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.353186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.924772Z","time spent":"428.407395ms","remote":"127.0.0.1:48042","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-13T19:58:39.353118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"487.01352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-239327\" ","response":"range_response_count:1 size:4663"}
	{"level":"info","ts":"2024-09-13T19:58:39.353371Z","caller":"traceutil/trace.go:171","msg":"trace[807714909] range","detail":"{range_begin:/registry/minions/no-preload-239327; range_end:; response_count:1; response_revision:637; }","duration":"487.26507ms","start":"2024-09-13T19:58:38.866092Z","end":"2024-09-13T19:58:39.353357Z","steps":["trace[807714909] 'agreement among raft nodes before linearized reading'  (duration: 486.932274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.353412Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.866068Z","time spent":"487.335403ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4686,"request content":"key:\"/registry/minions/no-preload-239327\" "}
	{"level":"info","ts":"2024-09-13T20:07:51.913360Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":854}
	{"level":"info","ts":"2024-09-13T20:07:51.932008Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":854,"took":"17.84173ms","hash":4071954628,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-13T20:07:51.932116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4071954628,"revision":854,"compact-revision":-1}
	
	
	==> kernel <==
	 20:11:21 up 14 min,  0 users,  load average: 0.10, 0.15, 0.15
	Linux no-preload-239327 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:07:54.494721       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:07:54.494929       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:07:54.496110       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:07:54.496211       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:08:54.496995       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:08:54.497110       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 20:08:54.497214       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:08:54.497266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:08:54.498258       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:08:54.498469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:10:54.499152       1 handler_proxy.go:99] no RequestInfo found in the context
	W0913 20:10:54.499253       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:10:54.499510       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0913 20:10:54.499505       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:10:54.501466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:10:54.501516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] <==
	E0913 20:05:57.053462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:05:57.520893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:06:27.060773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:06:27.528262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:06:57.067222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:06:57.535172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:27.073033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:27.543391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:57.079022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:57.552427       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:08:27.086331       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:27.559794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:08:36.584480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-239327"
	E0913 20:08:57.094465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:57.568972       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:09:03.011531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="224.777µs"
	I0913 20:09:15.012794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="58.04µs"
	E0913 20:09:27.104730       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:27.581310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:09:57.111165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:57.589640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:10:27.118659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:27.599141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:10:57.125710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:57.607212       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:57:54.989255       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:57:54.998774       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.13"]
	E0913 19:57:54.999177       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:57:55.059172       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:57:55.059241       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:57:55.059285       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:57:55.065727       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:57:55.066754       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:57:55.066930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:57:55.072475       1 config.go:199] "Starting service config controller"
	I0913 19:57:55.072498       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:57:55.072522       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:57:55.072528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:57:55.096787       1 config.go:328] "Starting node config controller"
	I0913 19:57:55.096988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:57:55.173653       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:57:55.173705       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:57:55.202080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] <==
	I0913 19:57:50.801470       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:57:53.398674       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:57:53.398773       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:57:53.398788       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:57:53.398796       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:57:53.468034       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:57:53.468095       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:57:53.478910       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:57:53.479011       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:57:53.481652       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:57:53.482128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:57:53.585294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:10:17 no-preload-239327 kubelet[1365]: E0913 20:10:17.994537    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:10:19 no-preload-239327 kubelet[1365]: E0913 20:10:19.143579    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258219143328674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:19 no-preload-239327 kubelet[1365]: E0913 20:10:19.143676    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258219143328674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:29 no-preload-239327 kubelet[1365]: E0913 20:10:29.145506    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258229144780018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:29 no-preload-239327 kubelet[1365]: E0913 20:10:29.145719    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258229144780018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:30 no-preload-239327 kubelet[1365]: E0913 20:10:30.995184    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:10:39 no-preload-239327 kubelet[1365]: E0913 20:10:39.147733    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258239147351322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:39 no-preload-239327 kubelet[1365]: E0913 20:10:39.147780    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258239147351322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:41 no-preload-239327 kubelet[1365]: E0913 20:10:41.993670    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]: E0913 20:10:49.015500    1365 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]: E0913 20:10:49.150108    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258249149625147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:49 no-preload-239327 kubelet[1365]: E0913 20:10:49.150151    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258249149625147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:55 no-preload-239327 kubelet[1365]: E0913 20:10:55.993655    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:10:59 no-preload-239327 kubelet[1365]: E0913 20:10:59.153728    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258259152690721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:10:59 no-preload-239327 kubelet[1365]: E0913 20:10:59.154125    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258259152690721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:06 no-preload-239327 kubelet[1365]: E0913 20:11:06.995248    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:11:09 no-preload-239327 kubelet[1365]: E0913 20:11:09.157212    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258269156758780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:09 no-preload-239327 kubelet[1365]: E0913 20:11:09.158342    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258269156758780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:17 no-preload-239327 kubelet[1365]: E0913 20:11:17.996031    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:11:19 no-preload-239327 kubelet[1365]: E0913 20:11:19.159905    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258279159423438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:19 no-preload-239327 kubelet[1365]: E0913 20:11:19.160073    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258279159423438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] <==
	I0913 19:57:54.844408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0913 19:58:24.850436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] <==
	I0913 19:58:25.373098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:58:25.383445       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:58:25.383565       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:58:42.787282       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:58:42.787524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2!
	I0913 19:58:42.795656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2fae039-fec2-4875-a26f-88621d1b9405", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2 became leader
	I0913 19:58:42.888766       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-239327 -n no-preload-239327
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-239327 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bq7jp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp: exit status 1 (62.971574ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bq7jp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.24s)
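For context, the captured state above explains the timeout: the only non-running pod in the profile is metrics-server-6867b74b74-bq7jp, stuck in ImagePullBackOff against the intentionally unreachable fake.domain registry configured earlier in the run, and no kubernetes-dashboard pods appear anywhere on the node, so the wait for the user app can never complete. A minimal manual spot-check of the same condition, sketched using the context name and field selector from the helpers_test.go commands above (the dashboard namespace and label are assumptions carried over from the sibling test below):

    kubectl --context no-preload-239327 get pods -A --field-selector=status.phase!=Running
    kubectl --context no-preload-239327 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard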

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:12:09.716960559 +0000 UTC m=+6683.195965782
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
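The expired wait above is a poll for pods carrying the quoted label in the quoted namespace. An equivalent one-off check against the same profile, sketched using only the context name, namespace, and label that appear in the test lines above (add --watch to keep polling roughly as the test does):

    kubectl --context default-k8s-diff-port-512125 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard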
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-512125 logs -n 25: (2.519931457s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
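	For reference, the flags recorded in the final start row above (profile old-k8s-version-234290) correspond to a single CLI invocation roughly as follows. This is a reconstruction from the table, assuming the test binary is invoked via the MINIKUBE_BIN path (out/minikube-linux-amd64) shown in the log below; line breaks are added only for readability:
	
	  # reconstructed from the last "start" entry in the command table above
	  out/minikube-linux-amd64 start -p old-k8s-version-234290 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0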
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
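
	The clock check above runs `date +%s.%N` over SSH and compares the result with the host's time; the delta here (~75ms) is inside minikube's drift tolerance. A minimal sketch of the same comparison, using the SSH key and address from this run:

	# Guest vs. host clock drift, computed the same way as the check above (sketch).
	guest=$(ssh -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa \
	            docker@192.168.50.13 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.6f s\n", h - g }'
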
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
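
	The block above configures the CRI-O runtime on the guest: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and IP forwarding are enabled, and crio is restarted and verified with crictl. Condensed from the commands in the log (run on the guest; not a substitute for minikube's provisioning):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo /usr/bin/crictl version   # expect RuntimeName: cri-o, RuntimeApiVersion: v1
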
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
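
	Because no preload tarball exists for this profile, every image above is checked with `podman image inspect`, removed from the runtime with `crictl rmi` when the stored hash does not match, and then loaded from minikube's on-disk cache with `podman load -i`. A simplified sketch of that pattern for a single image (the real code also compares against a pinned image ID):

	img=registry.k8s.io/etcd:3.5.15-0
	tar=/var/lib/minikube/images/etcd_3.5.15-0
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale copy
	  sudo podman load -i "$tar"                            # load the cached tarball
	fi
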
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
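
	In the kubelet drop-in above, the empty `ExecStart=` line is systemd's way of clearing the base unit's command before the overriding one is set. The merged result can be inspected on the guest, for example:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the effective (overridden) command line
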
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
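
	The rendered kubeadm config (2158 bytes of kubeadm.yaml.new) and the kubelet unit files are copied to the guest above. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the v1.31.1 binary staged by minikube:

	# Optional sanity check of the generated config on the guest (illustrative).
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
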
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
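(The three "openssl x509 -hash -noout" plus "ln -fs" steps above are how the CA certs are made visible to OpenSSL's default trust lookup: the printed subject hash is exactly the file name OpenSSL expects under /etc/ssl/certs, which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A minimal Go sketch of the same pattern, not minikube's actual code, with a placeholder cert path:

    package main

    // Sketch only: mirrors the "openssl x509 -hash" + symlink pattern seen in the log.
    // The certificate path below is a placeholder, not taken from this run.

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumption: any readable PEM cert

        // Ask openssl for the subject hash it uses to locate trusted certs.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Point /etc/ssl/certs/<hash>.0 at the cert, as the log's "ln -fs" does.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ignore the error if the link does not exist yet
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("trusted via", link)
    }
)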
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
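(Each "-checkend 86400" call above asks openssl whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now: exit status 0 means it will, a non-zero exit means it expires inside the window and would need regeneration before the restart proceeds. A hedged Go sketch of that probe, using a placeholder path rather than the exact files from this run:

    package main

    // Sketch: the same expiry probe as "openssl x509 -noout -in <crt> -checkend 86400".

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithin reports whether the certificate at path expires within the given
    // number of seconds, judging by openssl's exit status (0 = still valid then).
    func expiresWithin(path string, seconds int) (bool, error) {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        if err := cmd.Run(); err != nil {
            if _, ok := err.(*exec.ExitError); ok {
                return true, nil // non-zero exit: cert expires within the window
            }
            return false, err // openssl missing, unreadable file, etc.
        }
        return false, nil
    }

    func main() {
        // Placeholder path; the run above checks the files under /var/lib/minikube/certs.
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }
)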
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
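(Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these overrides; this is a reconstruction from the commands in the log, not a capture of the file from the VM:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
)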
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
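
The retry sequence above (403 while anonymous access is still forbidden by the bootstrapping RBAC roles, then 500 until the post-start hooks finish, and finally 200) is a plain HTTP poll of the apiserver's /healthz endpoint. A minimal sketch of such a wait loop follows; the function is hypothetical, not minikube's actual implementation, and it skips TLS verification on the assumption that this is a throwaway local test cluster.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap; a real client
		// would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.13:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
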
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
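
"Configuring bridge CNI" here amounts to dropping a conflist into /etc/cni/net.d. The exact 496-byte payload minikube copies is not shown in the log, so the sketch below uses a representative bridge conflist with illustrative values; only the pod subnet (10.244.0.0/16) is taken from elsewhere in this run.

```go
package main

import "os"

// A representative bridge CNI conflist. The subnet matches the pod CIDR used
// later in this run; the remaining values are illustrative, not minikube's exact file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Mirror the mkdir + scp pair in the log: create the directory, then write the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```
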
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
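
Each pod in the wait loop above is skipped because the node itself has not reported Ready yet, so the per-pod Ready check never gets a chance to pass. A minimal client-go sketch of that underlying check is below, assuming a reachable kubeconfig at the path the log uses; the helper is hypothetical, not minikube's own pod_ready code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-fjzxv")
	fmt.Println(ok, err)
}
```
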
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
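
Each addon above is ultimately enabled by shelling out to the cluster's pinned kubectl with the in-VM kubeconfig, exactly as the `sudo KUBECONFIG=... kubectl apply -f ...` commands in the log show. A small sketch of that pattern follows; the paths are taken from the log, while the helper itself is hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyManifests runs the cluster's own kubectl against a set of addon manifests,
// mirroring the `sudo KUBECONFIG=... kubectl apply -f ...` invocations in the log.
func applyManifests(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	fmt.Println(err)
}
```
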
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
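
The kubelet drop-in, unit file, and kubeadm.yaml copied above are rendered from the node parameters shown earlier in the log (binary path, node name, node IP). A small text/template sketch reproducing the unit's ExecStart line is below; the template text is abridged and the struct is a hypothetical stand-in for minikube's real config type.

```go
package main

import (
	"os"
	"text/template"
)

// kubeletUnitTmpl follows the shape of the kubelet unit printed earlier in the log
// (abridged); the flag values are filled in from per-node parameters.
const kubeletUnitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
	_ = t.Execute(os.Stdout, struct {
		BinDir, NodeName, NodeIP string
	}{
		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
		NodeName: "default-k8s-diff-port-512125",
		NodeIP:   "192.168.61.3",
	})
}
```
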
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
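
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP, dropping any stale entry first. The same idempotent update expressed in Go (standard library only; the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<host>" and appends
// "ip\thost", matching what the grep/cp one-liner in the log does.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.3", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```
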
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
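
The three test/ln sequences above install each CA into the trust store under its OpenSSL subject hash (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A minimal Go sketch of that step, run locally rather than over SSH as minikube does (a hypothetical helper, not minikube's own code; the cert path is taken from the log):

// cahash.go: link a CA certificate into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the `openssl x509 -hash` + `ln -fs` steps in the log above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link so the trust store points at this cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", cert, "->", link)
}
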
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
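
The repeated `openssl x509 -noout -checkend 86400` runs above confirm that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. A minimal Go sketch of an equivalent check (a hypothetical standalone helper, not minikube's own implementation):

// checkend.go: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1) // same convention as openssl -checkend: non-zero exit means "expiring"
	}
	fmt.Println("Certificate will not expire")
}
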
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
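
The grep/rm sequence above applies a simple rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8444 is removed so the kubeadm phases that follow can regenerate it. A rough Go sketch of that pattern (a hypothetical helper running locally, whereas minikube runs the equivalent shell commands over SSH):

// cleanstale.go: remove kubeconfigs that do not reference the expected
// control-plane endpoint, so `kubeadm init phase kubeconfig` recreates them.
package main

import (
	"bytes"
	"log"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8444")
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			continue // nothing to clean up for this file
		} else if err != nil {
			log.Fatal(err)
		}
		if !bytes.Contains(data, endpoint) {
			log.Printf("%s does not reference %s, removing", path, endpoint)
			if err := os.Remove(path); err != nil {
				log.Fatal(err)
			}
		}
	}
}
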
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
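
The retry.go lines above show the kvm2 driver polling libvirt for the domain's DHCP lease with growing, jittered delays until an IP appears. A simplified Go sketch of such a wait loop (lookupIP is a hypothetical stand-in for the libvirt lease query, and the delay policy here is only illustrative):

// waitip.go: poll for a VM IP address with jittered, growing delays,
// similar in spirit to the retry.go lines in the log above.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt DHCP leases by MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address") // placeholder
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %s: waiting for machine to come up", jittered)
		time.Sleep(jittered)
		delay *= 2 // grow the base delay on every failed attempt
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:11:33:43", 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine is up at", ip)
}
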
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
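
The healthz checks above poll https://192.168.61.3:8444/healthz roughly every 500ms, treating 403 (anonymous access still forbidden), 500 (post-start hooks not yet finished), and connection refused as "not ready", and stopping once a plain 200 "ok" comes back. A minimal Go sketch of such a poller (hypothetical; certificate verification is skipped here as a stand-in for loading the cluster CA the way minikube does):

// healthwait.go: poll the apiserver /healthz endpoint until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is only a stand-in for trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.3:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("stopped: %v", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
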
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
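
The pod_ready.go lines above wait on each system-critical pod's Ready condition; for metrics-server-6867b74b74-bq7jp that condition stays False, which is what eventually fails the test. A compact client-go sketch of the same readiness check (a hypothetical helper using the default kubeconfig; the pod name is taken from the log above):

// podready.go: wait until a pod in kube-system reports the Ready condition.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	name := "coredns-7c65d6cfc9-zvnss" // pod name taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatalf("pod %q never became Ready", name)
}
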
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
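
provision.go:117 above issues a machine server certificate signed by the minikube CA, with SANs covering 127.0.0.1, the VM IP, localhost, minikube, and the machine name. A condensed Go sketch of issuing a SAN certificate like that (hypothetical and self-contained: it generates a throwaway CA instead of reusing minikube's ca.pem/ca-key.pem, and error handling for key generation is elided):

// servercert.go: issue a server certificate with IP and DNS SANs, signed by a CA,
// roughly the step provision.go performs with minikube's CA material.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in CA; minikube reuses ~/.minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-234290"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: addresses and names the machine may be reached at.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.137")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-234290"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
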
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
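Before restarting CRI-O, the 71926 run above points crictl at unix:///var/run/crio/crio.sock and rewrites /etc/crio/crio.conf.d/02-crio.conf with sed so that pause_image becomes "registry.k8s.io/pause:3.2" and cgroup_manager becomes "cgroupfs". A minimal sketch of what those two sed invocations amount to, written as a hypothetical standalone Go program (not minikube's own code; the config path is the one shown in the log):

// Hypothetical sketch: replicate the effect of the two sed commands logged above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Pin the pause image and the cgroup manager, line by line, as sed did.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
}

After a change like this, CRI-O has to be restarted (sudo systemctl restart crio), which is exactly what the run above does next.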
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
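
The YAML dumped above is the kubeadm, kubelet, and kube-proxy configuration rendered for this v1.20.0 profile; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new (2123 bytes) and compared against the existing /var/tmp/minikube/kubeadm.yaml. As a rough illustration of how such a stanza can be produced from per-node parameters, here is a hypothetical Go text/template sketch (not minikube's actual generator; the values are taken from the log above):

// Hypothetical sketch: render the InitConfiguration stanza shown above from node parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Values as they appear in the kubeadm options logged above.
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.72.137", 8443, "old-k8s-version-234290"}

	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}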
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
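Each openssl x509 -noout ... -checkend 86400 run above asks whether the given certificate will expire within the next 24 hours before the cluster restart proceeds. The same check, as a small hypothetical Go sketch using crypto/x509 (the path is one of the certificates from the log; this is not minikube's implementation):

// Hypothetical sketch: report whether a PEM certificate expires within a given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}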
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
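
kubeconfig.go reports that the profile's cluster and context entries are missing and repairs the kubeconfig under a write lock. A rough sketch of that kind of repair using client-go's clientcmd package is below; the file path is a placeholder, the server URL is derived from the node IP/port in the StartCluster config above, and credentials are omitted, so this is only an approximation of what the real code does.

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        const path = "/path/to/kubeconfig" // placeholder, not the Jenkins path from the run
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = api.NewConfig() // start from an empty config if the file is unreadable
        }
        name := "old-k8s-version-234290"
        // Server matches the node IP/port shown in the StartCluster config; AuthInfos omitted for brevity.
        cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.72.137:8443"}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
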
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
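
From here the log repeats "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms while api_server.go waits for the apiserver process to appear. A small Go sketch of that polling pattern follows; apiserverRunning is a local stand-in for the SSH-based pgrep check, not minikube's implementation.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning is illustrative: the test runs pgrep with sudo over SSH instead.
    func apiserverRunning() error {
        return exec.Command("pgrep", "-f", "kube-apiserver").Run()
    }

    func waitForProcess(check func() error, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for process")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForProcess(apiserverRunning, 500*time.Millisecond, 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
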
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
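
The retry.go lines above show the KVM driver re-querying the DHCP leases for the embed-certs-175374 domain, sleeping a little longer each attempt until the machine reports an IP. The sketch below captures that retry-with-growing-backoff shape; lookupIP is a stand-in, not the libmachine call, and the wait calculation is an assumption chosen to resemble the 1s/2.6s/3.9s gaps in the log.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("machine does not have an IP yet")

    // lookupIP pretends the DHCP lease shows up on the third attempt.
    func lookupIP(attempt int) (string, error) {
        if attempt < 2 {
            return "", errNoIP
        }
        return "192.168.39.32", nil
    }

    func retryWithBackoff(attempts int, base time.Duration, fn func(int) (string, error)) (string, error) {
        for i := 0; i < attempts; i++ {
            ip, err := fn(i)
            if err == nil {
                return ip, nil
            }
            // Grow the wait with each attempt and add up to 1s of jitter.
            wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(time.Second)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        ip, err := retryWithBackoff(5, time.Second, lookupIP)
        fmt.Println(ip, err)
    }
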
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
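
The interleaved pod_ready.go lines poll the metrics-server pods and keep logging "Ready":"False". A sketch of such a readiness poll done directly with client-go is below; the kubeconfig path is a placeholder and the pod name is taken from the log, so treat this as an illustration of the check rather than minikube's own helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's "Ready" condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-7ltrm", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println(`pod has status "Ready":"False"`)
            time.Sleep(2 * time.Second)
        }
    }
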
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
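
provision.go:117 generates the machine's server certificate with the SAN list shown (127.0.0.1, 192.168.39.32, embed-certs-175374, localhost, minikube), signed by the profile's CA. The compressed Go sketch below issues a certificate with those SANs; for brevity it self-signs instead of signing with ca.pem/ca-key.pem, so it is not the provisioning code itself.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-175374"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-175374", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.32")},
        }
        // Self-signed for the sketch; minikube signs with its shared CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
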
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
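
fix.go compares the guest clock (read with "date +%s.%N" over SSH) against the host clock and accepts the ~75ms delta as within tolerance. A small sketch of that comparison is below; the guest timestamp is the literal value from the log, and the tolerance constant is assumed for illustration rather than taken from minikube.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseUnixSeconds turns "seconds.nanoseconds" output from date +%s.%N into a time.Time.
    func parseUnixSeconds(s string) (time.Time, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseUnixSeconds("1726257510.303110870") // value from the log
        if err != nil {
            panic(err)
        }
        host := time.Now()
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }
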
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
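
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins pause_image to registry.k8s.io/pause:3.10, sets cgroup_manager to cgroupfs, and adjusts conmon_cgroup and default_sysctls before restarting CRI-O. Below is a sketch of the same replace-or-append edit done in Go against a local copy of the file; the real run edits the guest over SSH, and the helper here is an assumption, not minikube's code.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces an existing "key = ..." line or appends one, mirroring the sed pattern in the log.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile("(?m)^.*" + key + " = .*$")
        line := fmt.Sprintf("%s = %q", key, value)
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, line)
        }
        return conf + "\n" + line + "\n"
    }

    func main() {
        const path = "02-crio.conf" // local copy for illustration; the test edits /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            data = []byte{} // start from an empty file if the sample is missing
        }
        conf := string(data)
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("updated", path)
    }
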
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
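
The preload step copies the ~388MB preloaded-images tarball to the guest and unpacks it under /var with lz4 before re-checking crictl images. The short sketch below just shells out to the same tar invocation locally; the tarball path is the guest path from the log and would need adjusting to run anywhere else.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same flags as the log line above; run locally here rather than over SSH.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        fmt.Println("preloaded images extracted under /var")
    }
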
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
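The block above is the full kubeadm/kubelet/kube-proxy configuration that gets written to /var/tmp/minikube/kubeadm.yaml.new below (2159 bytes). A hedged sketch for validating that rendered file on the node; it assumes the `kubeadm config validate` subcommand is available in the pinned v1.31.1 binary:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new   # prints "ok" when the config parses and passes validation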
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
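The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. The same logic from that log line, re-wrapped with comments for readability:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts          # drop any stale entry
      echo "192.168.39.32	control-plane.minikube.internal"              # append the current mapping
    } > /tmp/h.$$                                                       # stage in a PID-keyed temp file
    sudo cp /tmp/h.$$ /etc/hosts                                        # then copy it back into place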
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
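The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the hash, and the trust directory expects a link named after that hash with a .0 suffix. A short sketch of that convention, using the minikubeCA file from this log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # same link the log creates (b5213941.0 here)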
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
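The series of `-checkend 86400` runs above verifies that each control-plane certificate is still valid 24 hours from now; openssl exits non-zero if the certificate would expire within that window. A hedged loop over a few of the same files, for illustration:

    for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
        echo "${crt}: valid for at least another 24h"
      else
        echo "${crt}: expires within 24h (would need regeneration)"
      fi
    done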
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
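The 403 → 500 → 200 progression above is the restarted apiserver coming up: anonymous requests are Forbidden until the RBAC bootstrap roles exist, then /healthz reports individual post-start hooks until they all pass. Roughly the same probe by hand (hedged; -k because the serving certificate is signed by the cluster CA, not a system root):

    curl -sk https://192.168.39.32:8443/healthz                       # "ok" once healthy
    curl -sk "https://192.168.39.32:8443/healthz?verbose" | tail -n 5 # per-check [+]/[-] detail like the 500 bodies above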
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
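The oom_adj probe above confirms the restarted apiserver carries the expected OOM-score adjustment (-16, i.e. much less likely to be killed under memory pressure). A hedged equivalent check, also reading the modern oom_score_adj interface:

    cat /proc/"$(pgrep -xn kube-apiserver)"/oom_adj          # legacy interface, as in the log (-17..15)
    cat /proc/"$(pgrep -xn kube-apiserver)"/oom_score_adj    # current interface, scaled to -1000..1000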
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
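The interleaved pod_ready.go lines come from three other clusters under test (processes 71233, 71702, 71424), each polling a metrics-server pod that never reports Ready. A rough sketch of how such a pod could be inspected directly, assuming the standard k8s-app=metrics-server label and the kubectl context of the affected profile (both are assumptions, not taken from this log):

	# show the metrics-server pod, its phase and node
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	# surface recent events and readiness-probe failures for that pod
	kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 40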
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
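
The block above is one complete pass of minikube's log-gathering loop: each control-plane component is probed with crictl, none is found, so only the kubelet, dmesg, CRI-O and container-status logs can be collected. A minimal shell sketch of an equivalent manual probe, assuming shell access to the node (for example via minikube ssh) and that crictl is installed:

    # same flags as the logged command: all states, IDs only, filtered by name
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no containers found matching \"$name\""
    done
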
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
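
The recurring "describe nodes" failure above ("The connection to the server localhost:8443 was refused") shows that the kube-apiserver is not listening, which matches the empty crictl probes. A quick manual check of the same condition, assuming the default apiserver port 8443 on the node:

    # probe the apiserver health endpoint directly; it fails the same way while the apiserver is down
    curl -k https://localhost:8443/healthz
    # or re-run the exact command minikube logs above
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
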
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
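
The interleaved pod_ready lines come from three other test clusters (processes 71233, 71702 and 71424), each polling its metrics-server pod, which never reports Ready. An equivalent manual check, assuming the usual k8s-app=metrics-server label and a matching kubectl context (placeholder <profile>):

    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
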
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
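The "Gathering logs for ..." round above follows a fixed recipe: resolve each control-plane component to a container ID with crictl, tail that container's logs, fall back to journalctl for kubelet and CRI-O, and finish with a describe-nodes pass through the kubectl binary bundled on the node. A minimal shell sketch of the same checks run directly on the node, assuming crictl is installed and CRI-O is the runtime as in this job (the container ID is a placeholder for an ID returned by the first command):

    # find the container ID for one component, e.g. kube-apiserver
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the last 400 log lines of a container ID found above
    sudo crictl logs --tail 400 <container-id>
    # kubelet and CRI-O are systemd units, not containers; read their journals instead
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # cluster-level view via the kubectl binary and kubeconfig minikube ships onto the node
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # overall container status, falling back to docker if crictl is missing
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a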
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
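The readiness sequence that ends in the "Done!" line above can be reproduced by hand: hit the apiserver healthz endpoint (this profile uses the non-default API port 8444 at 192.168.61.3, as shown in the log), confirm the kube-system pods report Ready, and confirm the kubelet unit is active on the node. A rough equivalent from the host, assuming the kubeconfig context carries the profile name, which is minikube's default:

    # apiserver health, as checked at https://192.168.61.3:8444/healthz above (-k: minikube's self-signed CA)
    curl -k https://192.168.61.3:8444/healthz
    # readiness of the system-critical pods the log waits on
    kubectl --context default-k8s-diff-port-512125 -n kube-system get pods
    kubectl --context default-k8s-diff-port-512125 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    # kubelet service state on the node (the log runs the systemctl check over SSH)
    minikube -p default-k8s-diff-port-512125 ssh "sudo systemctl is-active kubelet"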
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
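The NodePressure step just above only reads node capacity and pressure conditions; the figures it reports ("storage ephemeral capacity is 17734596Ki", "cpu capacity is 2") come straight from the node object. A quick way to inspect the same data, again assuming the context and node name match the profile as the static pod names above indicate:

    # capacity map behind the two "node ... capacity" log lines
    kubectl --context embed-certs-175374 get node embed-certs-175374 -o jsonpath='{.status.capacity}{"\n"}'
    # pressure conditions (MemoryPressure/DiskPressure/PIDPressure) that the check verifies are clear
    kubectl --context embed-certs-175374 describe node embed-certs-175374 | sed -n '/Conditions:/,/Addresses:/p'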
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
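The troubleshooting advice embedded in the failure above boils down to a handful of node-side commands; collected here as a runnable sketch (CRI-O socket path as printed by kubeadm, CONTAINERID a placeholder for an ID found by the ps step):

    # is the kubelet up, and if not, why
    systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager
    # the probe kubeadm keeps failing against
    curl -sSL http://localhost:10248/healthz
    # control-plane containers (if any) through the CRI-O socket, then their logs
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
    # the preflight warning: the kubelet unit is not enabled
    sudo systemctl enable kubelet.service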
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
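The config check above is a simple guard: for each kubeconfig under /etc/kubernetes, keep it only if it references the expected control-plane endpoint, otherwise delete it so the retried kubeadm init regenerates it. The same loop as a compact sketch, using the endpoint string exactly as it appears in the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the expected control-plane endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done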
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
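	The failure above reduces to the kubelet never answering its health check (http://localhost:10248/healthz) while 'kubeadm init' waited for the control plane on Kubernetes v1.20.0. A minimal troubleshooting sketch, assuming a systemd host and using only the commands the log itself suggests (the socket path, flag, and CONTAINERID placeholder are quoted from the output above and may differ on other machines):
	
		# On the minikube node (e.g. via 'minikube ssh'): check and enable the kubelet
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo systemctl enable kubelet.service    # addresses the [WARNING Service-Kubelet] above
	
		# Inspect control-plane containers through CRI-O, as the kubeadm message recommends
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# Retry with the cgroup-driver hint from the suggestion line
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	Related issue (from the log): https://github.com/kubernetes/minikube/issues/4172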
	
	
	==> CRI-O <==
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.582670410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331582647138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe21c55e-e7db-49df-9af1-86eccfea3200 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.583490404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec21f442-ad32-4a1c-8ef9-afdc5d5752fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.583569016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec21f442-ad32-4a1c-8ef9-afdc5d5752fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.583884169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec21f442-ad32-4a1c-8ef9-afdc5d5752fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.634120023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6552e260-1eb9-403c-ae58-b3d479b5d364 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.634215469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6552e260-1eb9-403c-ae58-b3d479b5d364 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.635337451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37e4efb9-5b4b-43f2-add3-6a2c34584fb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.635737093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331635715374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37e4efb9-5b4b-43f2-add3-6a2c34584fb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.636682777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc6469e7-4e95-456a-a98e-01bfca5eeba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.636787219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc6469e7-4e95-456a-a98e-01bfca5eeba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.636996156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc6469e7-4e95-456a-a98e-01bfca5eeba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.679835145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec6e7fc0-930a-4306-8758-549ef3b7c487 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.679938764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec6e7fc0-930a-4306-8758-549ef3b7c487 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.681270254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e24d5026-9198-49c9-a0fb-b71bea3abb03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.682064469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331682032712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e24d5026-9198-49c9-a0fb-b71bea3abb03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.682981684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cf3c3b4-29f3-4e86-838e-f0fb0a8fe6bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.683069397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cf3c3b4-29f3-4e86-838e-f0fb0a8fe6bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.683289798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cf3c3b4-29f3-4e86-838e-f0fb0a8fe6bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.722397692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b4413a0-5ba6-4bef-99a6-1a11ae956ec0 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.722702529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b4413a0-5ba6-4bef-99a6-1a11ae956ec0 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.724563355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a8a5423-fd40-4118-950c-f1a1d82de579 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.725213271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331725182612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a8a5423-fd40-4118-950c-f1a1d82de579 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.726150290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d51809-ae89-4c6d-ba51-054856926ac3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.726410089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d51809-ae89-4c6d-ba51-054856926ac3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:12:11.727241974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d51809-ae89-4c6d-ba51-054856926ac3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	727272a23be61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   58a301000ce7d       storage-provisioner
	7c02b3652c8f8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   957a4906fe4a4       coredns-7c65d6cfc9-pm4s9
	02eb787bf6a19       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8ca473c893d6c       coredns-7c65d6cfc9-2qg68
	00782ad9f16fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   ca0fbd4671343       kube-proxy-6zfwm
	25c925f18c164       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f3d61314d58f9       etcd-default-k8s-diff-port-512125
	3b172dac6b2fe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   07baef0ade19d       kube-apiserver-default-k8s-diff-port-512125
	b227cf71d8db5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   80d1a409081b6       kube-scheduler-default-k8s-diff-port-512125
	1c7c881fbf40e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   11cb043226726       kube-controller-manager-default-k8s-diff-port-512125
	683d63db2439b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   1127567245cc6       kube-apiserver-default-k8s-diff-port-512125
	
	
	==> coredns [02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-512125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-512125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=default-k8s-diff-port-512125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 20:02:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-512125
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:12:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:08:09 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:08:09 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:08:09 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:08:09 +0000   Fri, 13 Sep 2024 20:02:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.3
	  Hostname:    default-k8s-diff-port-512125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa295e50dae8466ebb3dcc5231a36e2f
	  System UUID:                fa295e50-dae8-466e-bb3d-cc5231a36e2f
	  Boot ID:                    abbc88b8-2b85-4789-973d-2b37147e3020
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2qg68                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-pm4s9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-default-k8s-diff-port-512125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-512125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-512125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-6zfwm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-512125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-tk8qn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node default-k8s-diff-port-512125 event: Registered Node default-k8s-diff-port-512125 in Controller
	
	
	==> dmesg <==
	[  +0.050519] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.876583] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.607494] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.724239] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.062732] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064778] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.188071] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.188934] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.329211] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.416439] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.067089] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.123926] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[Sep13 19:58] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.952992] kauditd_printk_skb: 87 callbacks suppressed
	[Sep13 20:02] systemd-fstab-generator[2561]: Ignoring "noauto" option for root device
	[  +0.059068] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.487896] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +0.081889] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.316007] systemd-fstab-generator[2993]: Ignoring "noauto" option for root device
	[  +0.121809] kauditd_printk_skb: 12 callbacks suppressed
	[Sep13 20:03] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db] <==
	{"level":"info","ts":"2024-09-13T20:02:48.808919Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.3:2380"}
	{"level":"info","ts":"2024-09-13T20:02:48.809104Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"bd69003d43e617bf","initial-advertise-peer-urls":["https://192.168.61.3:2380"],"listen-peer-urls":["https://192.168.61.3:2380"],"advertise-client-urls":["https://192.168.61.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T20:02:48.809140Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T20:02:48.818125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf switched to configuration voters=(13648440408855156671)"}
	{"level":"info","ts":"2024-09-13T20:02:48.818257Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd78613cdcde8fe4","local-member-id":"bd69003d43e617bf","added-peer-id":"bd69003d43e617bf","added-peer-peer-urls":["https://192.168.61.3:2380"]}
	{"level":"info","ts":"2024-09-13T20:02:49.231819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T20:02:49.231932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T20:02:49.231951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf received MsgPreVoteResp from bd69003d43e617bf at term 1"}
	{"level":"info","ts":"2024-09-13T20:02:49.231965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.231970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf received MsgVoteResp from bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.231978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became leader at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.231985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bd69003d43e617bf elected leader bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.235959Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.240031Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bd69003d43e617bf","local-member-attributes":"{Name:default-k8s-diff-port-512125 ClientURLs:[https://192.168.61.3:2379]}","request-path":"/0/members/bd69003d43e617bf/attributes","cluster-id":"bd78613cdcde8fe4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T20:02:49.240165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T20:02:49.240497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T20:02:49.240635Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T20:02:49.240662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T20:02:49.241285Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T20:02:49.249112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T20:02:49.249719Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T20:02:49.252441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.3:2379"}
	{"level":"info","ts":"2024-09-13T20:02:49.283665Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd78613cdcde8fe4","local-member-id":"bd69003d43e617bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.299689Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.317863Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:12:12 up 14 min,  0 users,  load average: 0.12, 0.13, 0.09
	Linux default-k8s-diff-port-512125 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4] <==
	W0913 20:07:52.155606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:07:52.155804       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:07:52.157130       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:07:52.157238       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:08:52.158216       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:08:52.158666       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 20:08:52.158568       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:08:52.158927       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:08:52.159993       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:08:52.160100       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:10:52.160339       1 handler_proxy.go:99] no RequestInfo found in the context
	W0913 20:10:52.160335       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:10:52.160967       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0913 20:10:52.161072       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:10:52.162242       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:10:52.162316       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258] <==
	W0913 20:02:40.399694       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.419486       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.463223       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.466808       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.480031       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.485464       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.553488       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.561976       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.591993       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.799670       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.819464       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.844168       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.862136       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.869303       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.902666       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.075127       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.126562       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.154011       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.286688       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.391667       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.476941       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.496305       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.505060       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.510503       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.581627       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1] <==
	E0913 20:06:58.221584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:06:58.663812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:28.227413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:28.672564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:58.236098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:58.681061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:08:09.743421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-512125"
	E0913 20:08:28.242577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:28.689449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:08:58.248565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:58.698003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:08:59.026572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="226.196µs"
	I0913 20:09:10.023654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="82.647µs"
	E0913 20:09:28.254311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:28.705954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:09:58.260933       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:58.713103       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:10:28.267673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:28.721091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:10:58.274218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:58.728820       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:11:28.281191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:11:28.735883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:11:58.287668       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:11:58.743590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 20:02:59.809546       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 20:02:59.840326       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.3"]
	E0913 20:02:59.840513       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 20:02:59.924649       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 20:02:59.924965       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 20:02:59.924999       1 server_linux.go:169] "Using iptables Proxier"
	I0913 20:02:59.931854       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 20:02:59.932135       1 server.go:483] "Version info" version="v1.31.1"
	I0913 20:02:59.932147       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 20:02:59.936990       1 config.go:199] "Starting service config controller"
	I0913 20:02:59.937083       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 20:02:59.937128       1 config.go:105] "Starting endpoint slice config controller"
	I0913 20:02:59.937136       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 20:02:59.937607       1 config.go:328] "Starting node config controller"
	I0913 20:02:59.937613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 20:03:00.037710       1 shared_informer.go:320] Caches are synced for node config
	I0913 20:03:00.037803       1 shared_informer.go:320] Caches are synced for service config
	I0913 20:03:00.037812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a] <==
	W0913 20:02:51.222246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 20:02:51.222298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:51.222332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 20:02:51.222384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:51.222344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:51.222445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.088834       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 20:02:52.088897       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 20:02:52.101928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 20:02:52.102042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.107104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 20:02:52.107635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.209655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 20:02:52.209799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.371498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 20:02:52.371562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.407135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:52.407191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.408348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 20:02:52.408394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.459633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 20:02:52.459669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.494617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:52.494651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 20:02:55.114858       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:10:58 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:10:58.009602    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:11:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:04.203525    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258264203181735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:04.203829    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258264203181735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:12 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:12.009617    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:11:14 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:14.205583    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258274204922520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:14 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:14.206003    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258274204922520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:24 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:24.210995    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258284210557335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:24 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:24.211256    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258284210557335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:27 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:27.008596    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:11:34 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:34.212377    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258294212136513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:34 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:34.212416    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258294212136513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:38 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:38.010136    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:11:44 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:44.213501    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258304213263802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:44 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:44.213547    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258304213263802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:52 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:52.008638    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:54.023231    2883 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:54.215795    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258314215173719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:11:54.215836    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258314215173719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:12:04.220277    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258324218393008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:12:04.220687    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258324218393008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:06 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:12:06.012501    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	
	
	==> storage-provisioner [727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc] <==
	I0913 20:03:01.511928       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 20:03:01.531597       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 20:03:01.531643       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 20:03:01.558588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 20:03:01.559120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62!
	I0913 20:03:01.561274       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13d78089-4533-4fc7-aeb3-4b7fda570d53", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62 became leader
	I0913 20:03:01.659906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tk8qn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn: exit status 1 (71.583546ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tk8qn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (545.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0913 20:03:50.604169   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:04:06.601426   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:05:00.700923   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:05:12.299374   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:05:51.373696   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:05:57.575922   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175374 -n embed-certs-175374
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:12:09.825049335 +0000 UTC m=+6683.304054562
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175374 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-175374 logs -n 25: (2.42383076s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
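
The provisioning steps above run each command (hostname, the /etc/hosts rewrite) over a native SSH session to the guest at 192.168.50.13. A minimal standalone sketch of that kind of round trip, using golang.org/x/crypto/ssh with the key path and user reported in the log (treat them as placeholders for your own environment; this is not minikube's provisioner):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the log lines above; adjust as needed.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.13:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}

Run from the CI host, this should print the guest's hostname, mirroring the "SSH cmd err, output: <nil>: minikube" line earlier in the log.
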
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
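
provision.go reports generating a server certificate signed by the cluster CA with the SAN list shown above. A rough, self-contained crypto/x509 sketch of issuing such a certificate; the throwaway CA below stands in for certs/ca.pem and ca-key.pem, and the key sizes and lifetimes are arbitrary assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the ca.pem/ca-key.pem referenced in the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-239327"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "no-preload-239327"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.13")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
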
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
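
fix.go compares the guest clock against the host-side timestamp and accepts the drift when it falls within a tolerance. A tiny sketch of that comparison using the two timestamps from the lines above; the 2s tolerance is an assumed value, not the one hard-coded in minikube:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above.
	guest := time.Date(2024, 9, 13, 19, 57, 30, 452618583, time.UTC)
	remote := time.Date(2024, 9, 13, 19, 57, 30, 377717716, time.UTC)
	delta := guest.Sub(remote)

	// Assumed threshold; the real limit lives in minikube's fix.go.
	const tolerance = 2 * time.Second
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, within) // prints delta=74.900867ms withinTolerance=true
}
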
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
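
The sequence above stops, disables, and masks cri-docker and docker so that CRI-O is the only container runtime left running. A condensed os/exec sketch of the same systemctl sequence (simplified: every unit gets the full stop/disable/mask treatment, and the final is-active check drops the log's extra "service" argument); it is meant to run on the guest VM, not the CI host:

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit mirrors the stop -> disable -> mask flow shown in the log.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		disableUnit(u)
	}
	// Simplified check that docker is no longer active.
	stillActive := exec.Command("sudo", "systemctl", "is-active", "--quiet", "docker").Run() == nil
	fmt.Println("docker still active:", stillActive)
}
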
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
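
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and replace conmon_cgroup with "pod". A rough in-memory equivalent of those edits; the starting file contents below are hypothetical:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
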
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
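
Interleaved with the lines above, the second profile (default-k8s-diff-port-512125) is still waiting for a DHCP lease, polling libvirt and retrying with a growing, jittered delay (retry.go:31). A small sketch of that retry pattern; the lookup stub, backoff constants, and returned address are made up for illustration:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it succeeds, sleeping a jittered, growing
// interval between attempts, similar in spirit to the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff += backoff / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet") // stand-in for the libvirt lookup
		}
		return "192.168.50.13", nil // placeholder address
	}, 10)
	fmt.Println(ip, err)
}
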
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
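The preceding lines load each cached image tarball from /var/lib/minikube/images into CRI-O's storage with "sudo podman load -i". A minimal, illustrative Go sketch of that loop (the image list and directory are copied from the log; minikube's real code issues these commands over SSH via ssh_runner, so this is only the shape of the operation):

	package main

	import (
		"fmt"
		"os/exec"
		"path"
	)

	// loadCachedImages replays the pattern shown in the log: each cached
	// tarball is loaded into the container runtime's storage with podman.
	func loadCachedImages(dir string, tarballs []string) error {
		for _, tb := range tarballs {
			p := path.Join(dir, tb)
			fmt.Println("Loading image:", p)
			if out, err := exec.Command("sudo", "podman", "load", "-i", p).CombinedOutput(); err != nil {
				return fmt.Errorf("podman load %s: %v\n%s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		imgs := []string{"kube-proxy_v1.31.1", "kube-controller-manager_v1.31.1", "kube-apiserver_v1.31.1", "storage-provisioner_v5"}
		if err := loadCachedImages("/var/lib/minikube/images", imgs); err != nil {
			fmt.Println(err)
		}
	}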
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
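The block above is the kubelet systemd drop-in that minikube renders for this node (binary path, --hostname-override, --node-ip). A minimal Go sketch of that kind of templating, with the values hard-coded from the log for illustration; this is not minikube's actual implementation:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit mirrors the [Service] override shown in the log above.
	const kubeletUnit = `[Service]
	ExecStart=
	ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	type nodeConfig struct {
		BinDir   string
		NodeName string
		NodeIP   string
	}

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values copied from the log for illustration.
		cfg := nodeConfig{
			BinDir:   "/var/lib/minikube/binaries/v1.31.1",
			NodeName: "no-preload-239327",
			NodeIP:   "192.168.50.13",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}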
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
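At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. One way to sanity-check such a generated file, assuming kubeadm v1.26 or newer (which ships a "config validate" subcommand), sketched in Go; the binary path and file name are the ones shown in the log, used here purely for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Validate the generated kubeadm config against the installed kubeadm.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm",
			"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubeadm config validation failed:", err)
		}
	}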
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
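The one-liner above rewrites the control-plane.minikube.internal entry in /etc/hosts: it filters out any stale line for that hostname, appends the current IP, and copies the result back via a temp file. A rough Go equivalent of the same pattern (it writes to a scratch ".new" file rather than touching /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHostsEntry drops any existing line ending in "<TAB>hostname" and
	// appends a fresh "ip<TAB>hostname" mapping, mirroring the shell one-liner.
	func updateHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := updateHostsEntry("/etc/hosts", "192.168.50.13", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}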
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
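The openssl/ln sequence above builds the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL-style clients use to look certificates up in /etc/ssl/certs. A small Go sketch of the same pattern, shelling out to openssl for the subject hash; the paths are illustrative, so point it at a scratch directory rather than /etc when experimenting:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pattern:
	// each CA gets a symlink named <subject-hash>.0 in the certs directory.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
			fmt.Println(err)
		}
	}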
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
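Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be done natively with crypto/x509; a sketch, using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what "openssl x509 -checkend" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}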
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
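The healthz wait above polls https://192.168.50.13:8443/healthz until the apiserver answers. A simplified Go sketch of such a poll loop; TLS verification is skipped here only to keep the sketch short, whereas a real check should trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers "ok"
	// or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.13:8443/healthz", 2*time.Minute))
	}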
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
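The guest-clock check above compares the VM's "date +%s.%N" output against the host clock and accepts the machine if the delta is within a tolerance (about 85ms here). A tiny Go sketch of that comparison; the one-second tolerance below is illustrative, not minikube's actual value:

	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance reports whether the guest and host clocks differ by
	// no more than the given tolerance.
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(1726257471, 303496315) // value reported by the guest above
		host := guest.Add(-85 * time.Millisecond) // illustrative host timestamp
		fmt.Println(withinClockTolerance(guest, host, time.Second))
	}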
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
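Before CRI-O is configured, the lines above shelve competing container runtimes and CNI configs: the find/mv that disables 87-podman-bridge.conflist, then systemctl stop/disable/mask of cri-docker and docker. The Go sketch below is a hedged local reproduction of that cleanup (run via os/exec instead of minikube's ssh_runner); only the shell fragments are taken from the log, the helper name and error handling are illustrative.

```go
// runtime_cleanup_sketch.go - shelves bridge/podman CNI configs and masks the
// Docker/cri-dockerd units so CRI-O is the only runtime, as in the log above.
package main

import (
	"log"
	"os/exec"
)

func sh(cmd string) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		// Failures are tolerated: the unit or file may simply not exist.
		log.Printf("%q: %v\n%s", cmd, err, out)
	}
}

func main() {
	// Move bridge/podman CNI configs aside (cni.go:262 reports what was disabled).
	sh(`find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1".mk_disabled' _ {} \;`)

	// Stop and mask cri-dockerd and docker, mirroring the systemctl calls above.
	for _, c := range []string{
		"systemctl stop -f cri-docker.socket",
		"systemctl stop -f cri-docker.service",
		"systemctl disable cri-docker.socket",
		"systemctl mask cri-docker.service",
		"systemctl stop -f docker.socket",
		"systemctl stop -f docker.service",
		"systemctl disable docker.socket",
		"systemctl mask docker.service",
	} {
		sh(c)
	}
}
```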
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
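The block ending here configures CRI-O entirely through shell one-liners: crictl endpoint, pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl, br_netfilter/ip_forward, then a daemon-reload and crio restart. A minimal Go sketch of the same sequence, run locally rather than over SSH, follows; the command strings are copied from the log, while the program structure is an assumption.

```go
// crio_config_sketch.go - replays the CRI-O configuration steps from the log
// via os/exec. Hypothetical helper; minikube runs these over SSH (ssh_runner).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(cmd string) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
	fmt.Printf("ok: %s\n", cmd)
}

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
		// pause image and cgroup driver, as in crio.go:59 / crio.go:70
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// allow unprivileged low ports inside pods
		`grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		// kernel prerequisites, then restart CRI-O
		`modprobe br_netfilter`,
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload && systemctl restart crio`,
	}
	for _, s := range steps {
		run(s)
	}
}
```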
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
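The bash one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway (192.168.61.1), dropping any stale entry first. A small Go equivalent is sketched below, assuming direct root write access to /etc/hosts; minikube instead writes a temp file and sudo-copies it into place.

```go
// hosts_update_sketch.go - drops any old host.minikube.internal line from
// /etc/hosts and appends the current gateway IP, like the one-liner above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.1\thost.minikube.internal" // gateway IP from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // filter the stale entry, as grep -v does in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")

	// Requires root; the real flow copies a temp file over /etc/hosts via sudo.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
		log.Fatal(err)
	}
}
```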
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
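The preload path above stats /preloaded.tar.lz4, scp's the ~388 MB cached preload tarball across, and unpacks it into /var with lz4 (the extraction completes a few lines later at crio.go:469). Below is a hedged local sketch of the check-then-extract half only; the SCP transfer is omitted, the tarball path and tar flags come from the log, and everything else is assumed.

```go
// preload_extract_sketch.go - checks for the preload tarball and unpacks it
// into /var, mirroring the tar invocation in the log. Illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // destination path used by minikube

	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// In the log this is where minikube scp's the cached
		// preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 across.
		log.Fatalf("%s not present; copy the preload tarball here first", tarball)
	}

	// Same flags as the logged command: preserve xattrs (incl. security.capability),
	// decompress with lz4, extract under /var so the images land in the CRI-O store.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
}
```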
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
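The api_server.go lines above poll https://192.168.50.13:8443/healthz and treat the early 403 (anonymous user) and 500 (post-start hooks still failing) answers as "not ready yet" until the endpoint finally returns 200 ok. A simplified Go sketch of that loop is shown here; it assumes an anonymous client with TLS verification skipped and a fixed retry cadence, which is an approximation of the real behaviour, not minikube's exact code.

```go
// healthz_wait_sketch.go - polls the apiserver /healthz endpoint until it
// returns 200, roughly what the api_server.go lines above are doing.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.50.13:8443/healthz" // endpoint from the log

	// Anonymous probe: no client certs, TLS verification skipped, so early
	// answers come back 403/500 exactly as they do in the log.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	fmt.Println("gave up waiting for apiserver health")
}
```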
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
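The pod_ready.go wait above checks each system-critical pod but short-circuits (pod_ready.go:98) while the node itself still reports Ready=False, which is why every pod is "skipping!" here. A condensed client-go sketch of that node-then-pod check follows; the kubeconfig path is a placeholder and the structure is an approximation of the logged logic.

```go
// pod_ready_sketch.go - checks whether a kube-system pod is Ready, skipping
// the wait when its node is not Ready, similar to pod_ready.go above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-b24zg", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %s not Ready, skipping wait for %s\n", node.Name, pod.Name)
		return
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			fmt.Printf("%s is Ready\n", pod.Name)
		}
	}
}
```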
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
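	The kubelet unit and kubeadm drop-in rendered above are not applied at this point; the log later copies them onto the node and activates kubelet. A minimal sketch of that step, assuming the destination paths shown further down in this log (the local source file names here are placeholders, not files from the run):
	    # install the rendered systemd unit and kubeadm drop-in, then activate kubelet
	    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    sudo cp kubelet.service /lib/systemd/system/kubelet.service
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet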
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
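	The config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new and then consumed phase by phase during the restart (see the kubeadm init phase commands later in this log). A minimal sketch of that sequence, assuming the v1.31.1 binaries under /var/lib/minikube/binaries as used in this run:
	    # re-run the individual kubeadm init phases against the generated config
	    KUBEADM=/var/lib/minikube/binaries/v1.31.1/kubeadm
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    sudo "$KUBEADM" init phase certs all --config "$CFG"
	    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
	    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
	    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	    sudo "$KUBEADM" init phase etcd local --config "$CFG"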
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
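	The symlinks created above follow OpenSSL's subject-hash naming: the link under /etc/ssl/certs is named after the certificate's subject hash plus ".0". A quick way to reproduce the link name by hand, mirroring the openssl call the log just ran (the b5213941 value in the command above is derived from that hash):
	    # the subject hash determines the /etc/ssl/certs/<hash>.0 link name
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0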
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
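	The four grep/rm pairs above are the stale-kubeconfig check: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not reference https://control-plane.minikube.internal:8444 is removed so kubeadm can regenerate it. A compact sketch of the same check, assuming the endpoint used in this log:
	    # drop kubeconfigs that point at a different control-plane endpoint
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	    done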
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
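	The healthz probes above show the usual progression after a control-plane restart: connection refused while the apiserver starts, 403 while anonymous access to /healthz is not yet authorized, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still completing, then 200. The same probe can be reproduced by hand against the endpoint from this log:
	    # anonymous healthz probe; expect 403 -> 500 -> 200 as the apiserver finishes startup
	    curl -k https://192.168.61.3:8444/healthz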
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
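The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the current gateway IP. The same filter-and-append, sketched locally in Go (illustrative only; in the run above the step executes on the guest via ssh_runner):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal" // IP taken from the log

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line that does not already carry the minikube host alias,
	// then append the fresh entry (mirrors the grep -v / echo pipeline).
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}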
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
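Because the earlier stat existence check failed, the preloaded image tarball was copied over and has now been unpacked into /var with `tar --xattrs -I lz4` and removed. A rough local equivalent in Go, shelling out to the same tar invocation (assumes the lz4 binary is available, as it is in the minikube guest image; this is a sketch, not minikube's implementation):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, equivalent to `stat -c "%s %y" /preloaded.tar.lz4`.
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing, it would need to be copied first: %v", err)
	}

	// Same extraction command as in the log: preserve xattrs and let tar
	// drive lz4 decompression while unpacking under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}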
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
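The [Unit]/[Service] fragment above is the kubelet systemd drop-in rendered for this node. A small text/template sketch that produces a drop-in of the same shape; the struct and field names here are illustrative, and only the rendered paths, hostname and node IP come from the log:

package main

import (
	"log"
	"os"
	"text/template"
)

// nodeConfig holds just the values the drop-in needs; an illustrative type,
// not minikube's own.
type nodeConfig struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values copied from the log lines above.
	err := t.Execute(os.Stdout, nodeConfig{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
		Hostname:    "embed-certs-175374",
		NodeIP:      "192.168.39.32",
	})
	if err != nil {
		log.Fatal(err)
	}
}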
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
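The generated kubeadm config shown above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A quick stdlib-only sketch that splits such a stream and reports each document's kind, handy when eyeballing what kubeadm will be fed; the file path is taken from the log and the split logic is a simplification:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Path taken from the log; any multi-document kubeadm config works here.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}

	// Documents in a YAML stream are separated by lines containing only "---".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}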
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
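The `openssl x509 -noout -in ... -checkend 86400` runs above verify that each control-plane certificate stays valid for at least another 24 hours. The same check can be done with Go's crypto/x509 without shelling out; a sketch against one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Any of the certs checked in the log works here.
	const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of -checkend 86400: fail if the cert expires within a day.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		log.Fatalf("%s expires at %s (within 24h)", certPath, cert.NotAfter)
	}
	fmt.Println("certificate valid for at least another 24h")
}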
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
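The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple half-second poll for the apiserver process, which here appears after roughly two seconds. A hedged sketch of that wait loop with os/exec and a deadline (the two-minute budget is an assumption, not the timeout minikube uses):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching kube-apiserver process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			log.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log's half-second retries
	}
	log.Fatal("timed out waiting for the kube-apiserver process")
}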
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
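The 403 → 500 → 200 progression above is the apiserver's /healthz endpoint coming up while post-start hooks (RBAC bootstrap roles, bootstrap priority classes) finish. A sketch of polling it anonymously over HTTPS; TLS verification is skipped here purely for brevity, whereas a proper client would trust the cluster CA, and the address comes from the log:

package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against a self-signed apiserver cert; do not do
		// this outside of local debugging.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.32:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			log.Printf("healthz: %d %s", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never reported healthy")
}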
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
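Each pod_ready.go line above boils down to reading a pod's Ready condition, and every pod is skipped here because the node itself still reports Ready=False. The same per-pod check can be reproduced from the command line with kubectl's JSONPath output, wrapped in Go; the pod name comes from the log, while the kubectl context name is assumed to match the profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Prints "True" once the coredns pod reports the Ready condition.
	out, err := exec.Command("kubectl", "--context", "embed-certs-175374",
		"-n", "kube-system", "get", "pod", "coredns-7c65d6cfc9-lrrkx",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		log.Fatalf("kubectl failed: %v", err)
	}
	fmt.Println("Ready:", strings.TrimSpace(string(out)))
}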
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
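	Every describe-nodes attempt in this run fails with "The connection to the server localhost:8443 was refused", which matches the crictl output above: no kube-apiserver container exists, so nothing is listening on the apiserver port. A quick way to confirm that from inside the node, assuming ss and curl are present (neither appears in this log):

	    # check whether anything is listening on the apiserver port
	    sudo ss -tlnp | grep 8443
	    # probe the endpoint kubectl is trying to reach
	    curl -k https://localhost:8443/version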
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
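	Before each crictl sweep the tool also checks for a kube-apiserver process directly (sudo pgrep -xnf kube-apiserver.*minikube.*), and that check never succeeds either. For a kubeadm-style control plane the next place to look would be the static pod manifests and the kubelet's complaints about them; this is a sketch under the assumption of the standard kubeadm layout, which the log itself does not show:

	    # static pod manifests the kubelet is expected to start
	    ls -l /etc/kubernetes/manifests
	    # kubelet errors that would explain the missing control-plane pods
	    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail'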
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
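	(Editorial note: the cycle above repeats for the rest of this log; with the apiserver unreachable on localhost:8443, process 71926 re-runs the same diagnostics roughly every three seconds and every crictl query comes back empty. The sketch below only collects the commands already quoted verbatim in the ssh_runner lines into one script, so the same checks can be re-run by hand on the node; the kubectl binary path and kubeconfig location are copied from this log and may differ on another cluster.)

	# re-run the diagnostics shown in the log above (commands copied from the ssh_runner lines)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"   # empty output corresponds to: No container was found matching "<name>"
	done
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a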
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
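	(Editorial note: the interleaved pod_ready lines come from three other test processes (71233, 71702, 71424), each polling a metrics-server pod that never reports Ready. A minimal way to inspect the same condition manually is sketched below; the pod name is taken from this log, while the context name is a placeholder and not part of the log.)

	# check the Ready condition of one of the metrics-server pods seen above
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-fnznh \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready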
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
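The block above is one pass of a diagnostic loop that repeats throughout the rest of this log: pgrep finds no kube-apiserver process, each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) is probed with crictl and comes back empty, and then kubelet, dmesg, "describe nodes", CRI-O, and container-status logs are gathered before the next attempt. A rough manual reproduction of the same probes, assuming a shell on the affected node (commands taken verbatim from the log, with quoting added; the crictl line would be repeated per component):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no such container
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

The repeated "connection to the server localhost:8443 was refused" from the describe-nodes step is consistent with the empty crictl results: the kube-apiserver container never started, so nothing is listening on the API port.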
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
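
At this point in the old-k8s-version (v1.20.0) restart, every container query above returns an empty list and "describe nodes" fails with a connection-refused error on localhost:8443, which is consistent with no control-plane containers running yet; minikube therefore falls back to host-level log collection. A rough manual equivalent, reusing only the commands already visible in this log, would be:

    # list all CRI containers, including exited ones
    sudo crictl ps -a
    # container runtime and kubelet service logs
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # recent kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
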
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
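
The block above is minikube's stale-kubeconfig cleanup: after the kubeadm reset, each expected file under /etc/kubernetes is grepped for the profile's API endpoint and removed if the check fails (here every grep exits with status 2 because the files no longer exist, so the rm calls are effectively no-ops). Condensed into a sketch that reuses the same commands and endpoint shown above (an illustration of the pattern, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
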
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
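
The init itself succeeds, but kubeadm prints two follow-ups worth noting: the profile's kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API, and the kubelet service is not enabled on boot. The remedies are the ones the warnings themselves name (shown only as the commands kubeadm suggests, to be run on the node; old.yaml/new.yaml are kubeadm's own placeholders):

    # migrate the deprecated ClusterConfiguration/InitConfiguration spec
    kubeadm config migrate --old-config old.yaml --new-config new.yaml
    # enable the kubelet service so it starts on boot
    sudo systemctl enable kubelet.service
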
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
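
The loop of "kubectl get sa default" calls above is minikube waiting for the default service account to appear before it finishes granting kube-system:default cluster-admin via the minikube-rbac binding (elevateKubeSystemPrivileges, about 4.3s in this run). A roughly equivalent manual sequence, using the same in-VM kubectl and kubeconfig shown in the log (a sketch of the pattern, not the exact implementation):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default \
        --kubeconfig=/var/lib/minikube/kubeconfig
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig; do
      sleep 0.5
    done
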
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
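
This is the symptom behind the metrics-server related failures in this report: the pod metrics-server-6867b74b74-fnznh never reports Ready within the 4-minute extra wait, so pod_ready gives up with a context-deadline error and the test falls back to gathering logs. To inspect such a pod directly on the node, standard kubectl commands in the same style as the ones in this log would work (generic diagnostics, not commands taken from this run):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pod metrics-server-6867b74b74-fnznh -o wide
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system describe pod metrics-server-6867b74b74-fnznh
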
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
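	[editor's note] The addon flow logged above copies each manifest to /etc/kubernetes/addons/ on the node and then applies it with the bundled kubectl under the minikube kubeconfig. A minimal local sketch of that apply step, using the paths that appear in the log (minikube actually runs this command on the node over SSH; the helper function itself is illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs the bundled kubectl against manifests that were
// previously copied to /etc/kubernetes/addons, mirroring the
// "KUBECONFIG=... kubectl apply -f ..." commands visible in the log above.
func applyAddonManifests(kubectlPath, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlPath, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths as they appear in the log; adjust for a real environment.
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```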
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
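	[editor's note] The block above is minikube's diagnostic sweep: for each control-plane component it asks crictl for matching container IDs, then tails the last 400 lines of each container's logs. A rough sketch of that two-step pattern, reusing the crictl invocations quoted in the log (the wrapper functions are illustrative, not minikube's actual helpers, and assume sudo/crictl are available on the host):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name
// matches the given component, equivalent to:
//   sudo crictl ps -a --quiet --name=<name>
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the "crictl logs --tail 400 <id>" calls from the log above.
func tailLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
		}
	}
}
```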
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
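	[editor's note] Before printing "Done!", the run above waits for the node to be Ready, for each system-critical pod to report Ready, and for every kube-system pod to be running. A condensed client-go sketch of that last check; the kubeconfig path is the on-node one from the log and the simple phase test is an assumption (minikube's own readiness check also inspects pod conditions):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allSystemPodsRunning lists kube-system pods and reports whether every one of
// them has reached the Running (or Succeeded) phase.
func allSystemPodsRunning(kubeconfig string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			fmt.Printf("%s is still %s\n", p.Name, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allSystemPodsRunning("/var/lib/minikube/kubeconfig")
	fmt.Println("all kube-system pods running:", ok, "err:", err)
}
```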
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
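	[editor's note] The failed init above is kubeadm's kubelet-check giving up: every few seconds it issues the equivalent of `curl -sSL http://localhost:10248/healthz` and keeps getting connection refused, meaning the kubelet never came up on this old-k8s-version node, so minikube resets and retries. A minimal Go version of that probe, handy when reproducing the check by hand on the node (the retry interval and overall timeout are illustrative, not kubeadm's exact values):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubeletHealthz polls the kubelet healthz endpoint the same way the
// kubeadm [kubelet-check] messages above describe, until it answers 200 OK or
// the deadline expires.
func probeKubeletHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet healthz: %s\n", body)
				return nil
			}
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
}

func main() {
	if err := probeKubeletHealthz("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```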
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
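	[editor's note] Before retrying kubeadm init, the run checks each existing /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file that does not reference it, which is why the grep/rm pairs above appear even though the files are already missing. A small sketch of that stale-config cleanup, reading the files directly instead of shelling out to grep over SSH (the endpoint string is the one quoted in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig-style file that does not mention
// the expected control-plane endpoint, mirroring the
// "grep https://control-plane.minikube.internal:8443 ... ; rm -f ..." sequence above.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up (the log above hits this case).
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s is stale, removing\n", f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```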
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
	
	
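	(Editor's note: a minimal troubleshooting sequence, assembled only from the commands the captured log itself suggests above; run on the affected node/VM. CONTAINERID is a placeholder for an ID returned by the first crictl command, and the profile/flags for the final minikube start are whatever the test originally used.)
	
		# check whether the kubelet is running and why it may have failed
		systemctl status kubelet
		journalctl -xeu kubelet
	
		# list control-plane containers under cri-o and inspect a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# per the suggestion logged above, retry with the systemd cgroup driver
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	
		# collect logs for a bug report if the above does not help
		minikube logs --file=logs.txt
	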
	==> CRI-O <==
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.631035540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331631006410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bb9bdea-36fe-444a-9876-ad86c11d4470 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.631772068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50890579-0d54-4625-84c9-7e7ebdd84608 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.631857180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50890579-0d54-4625-84c9-7e7ebdd84608 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.632063472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50890579-0d54-4625-84c9-7e7ebdd84608 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.674270730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b34e569a-d555-4022-a935-c47a0c48e3a2 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.674360929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b34e569a-d555-4022-a935-c47a0c48e3a2 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.676387115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37408c76-54e8-47df-b83a-a565fc586c8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.676859216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331676833970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37408c76-54e8-47df-b83a-a565fc586c8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.677459504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3a81b9a-5090-4363-a173-388926dd896a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.677592538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3a81b9a-5090-4363-a173-388926dd896a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.677844175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3a81b9a-5090-4363-a173-388926dd896a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.717714857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1ca3ee1-aae4-495d-979e-6daf811a3648 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.717806029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1ca3ee1-aae4-495d-979e-6daf811a3648 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.719679028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df3a12ea-1554-4386-ba91-a78745355045 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.720290122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331720260766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df3a12ea-1554-4386-ba91-a78745355045 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.721010971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188a473c-b3fa-46cc-9adb-8bbe1a306ac1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.721080110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188a473c-b3fa-46cc-9adb-8bbe1a306ac1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.721324274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=188a473c-b3fa-46cc-9adb-8bbe1a306ac1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.758008707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=722284ec-4cd0-4ec6-8a44-b9df7f96c214 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.758297304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=722284ec-4cd0-4ec6-8a44-b9df7f96c214 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.760004205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9cac303-492a-42f6-9bd8-a45bf5002931 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.760449040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258331760428074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9cac303-492a-42f6-9bd8-a45bf5002931 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.761302894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5df61e35-fa73-4bb8-8bba-52a4796877dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.761394641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5df61e35-fa73-4bb8-8bba-52a4796877dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:12:11 embed-certs-175374 crio[698]: time="2024-09-13 20:12:11.761805161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5df61e35-fa73-4bb8-8bba-52a4796877dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db0694e689431       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   cebcdd6272ca6       storage-provisioner
	6fc1d1764b640       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5a65221b12cd8       busybox
	5a58f184d5704       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   fe47c92942c95       coredns-7c65d6cfc9-lrrkx
	57402126568c7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   140b66d4a3d1b       kube-proxy-jv77q
	d21ac9f9341fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   cebcdd6272ca6       storage-provisioner
	b7288e6c437a2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   9316c569b83c7       etcd-embed-certs-175374
	3e8d6c49b3b39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   45b75d912b109       kube-controller-manager-embed-certs-175374
	c32212fb06588       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   33eefe67c1e3a       kube-scheduler-embed-certs-175374
	8c6b66cfda64c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   ccac873610d8a       kube-apiserver-embed-certs-175374
	
	
	==> coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56828 - 44655 "HINFO IN 3690377131981054951.5232825123940261538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014623001s
	
	
	==> describe nodes <==
	Name:               embed-certs-175374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-175374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=embed-certs-175374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_49_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:49:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-175374
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:12:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:09:25 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:09:25 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:09:25 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:09:25 +0000   Fri, 13 Sep 2024 19:58:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    embed-certs-175374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed530f9a25374e51a3a8dd17430b96db
	  System UUID:                ed530f9a-2537-4e51-a3a8-dd17430b96db
	  Boot ID:                    15b3714b-88c3-4064-ac92-0b01d63e42fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-lrrkx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-175374                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-175374             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-175374    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-jv77q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-175374             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-fnznh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-175374 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-175374 event: Registered Node embed-certs-175374 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-175374 event: Registered Node embed-certs-175374 in Controller
	
	
	==> dmesg <==
	[Sep13 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058470] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042323] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.159115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.667593] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +2.413991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000038] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.807696] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.059678] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062069] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.191266] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.133781] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.296693] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +4.084501] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +2.050540] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.073678] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.513518] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.455135] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +3.300690] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.140693] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] <==
	{"level":"info","ts":"2024-09-13T19:58:40.120554Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T19:58:40.120759Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:58:40.120804Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:58:40.120924Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-13T19:58:40.120949Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-13T19:58:41.270139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-13T19:58:41.270268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:58:41.270310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-09-13T19:58:41.270343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T19:58:41.270367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-13T19:58:41.270394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T19:58:41.270419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-13T19:58:41.272116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:58:41.273113Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:58:41.273890Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:58:41.272070Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:embed-certs-175374 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:58:41.282618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:58:41.286546Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:58:41.286579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:58:41.287174Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:58:41.287967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-13T19:58:56.850452Z","caller":"traceutil/trace.go:171","msg":"trace[1093908443] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"127.521212ms","start":"2024-09-13T19:58:56.721883Z","end":"2024-09-13T19:58:56.849404Z","steps":["trace[1093908443] 'process raft request'  (duration: 127.074105ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T20:08:41.304552Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-09-13T20:08:41.315277Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.335713ms","hash":809977732,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2854912,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-13T20:08:41.315342Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":809977732,"revision":849,"compact-revision":-1}
	
	
	==> kernel <==
	 20:12:12 up 13 min,  0 users,  load average: 0.15, 0.20, 0.14
	Linux embed-certs-175374 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] <==
	W0913 20:08:43.649055       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:08:43.649151       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:08:43.650128       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:08:43.650298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:09:43.650821       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:09:43.651028       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:09:43.650856       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:09:43.651117       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:09:43.652271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:09:43.652303       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:11:43.653423       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:11:43.653790       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:11:43.653898       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:11:43.653954       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:11:43.654981       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:11:43.655040       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] <==
	E0913 20:06:46.280638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:06:46.745641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:16.286826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:16.752927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:07:46.292700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:07:46.759822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:08:16.299775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:16.767446       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:08:46.306459       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:08:46.775769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:09:16.313205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:16.783560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:09:25.896058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-175374"
	I0913 20:09:39.910805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="184.116µs"
	E0913 20:09:46.320068       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:09:46.790863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:09:52.908755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.925µs"
	E0913 20:10:16.326951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:16.798698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:10:46.333147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:10:46.806149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:11:16.340304       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:11:16.814137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:11:46.346759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:11:46.821900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:58:43.472828       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:58:43.485393       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0913 19:58:43.485676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:58:43.525451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:58:43.525565       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:58:43.525590       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:58:43.528314       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:58:43.528682       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:58:43.528706       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:58:43.530430       1 config.go:199] "Starting service config controller"
	I0913 19:58:43.530478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:58:43.530574       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:58:43.530595       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:58:43.531024       1 config.go:328] "Starting node config controller"
	I0913 19:58:43.531054       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:58:43.630763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:58:43.630810       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:58:43.631286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] <==
	I0913 19:58:41.120183       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:58:42.625768       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:58:42.625860       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:58:42.625870       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:58:42.625876       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:58:42.667566       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:58:42.667614       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:58:42.670054       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:58:42.670101       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:58:42.670759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:58:42.670841       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:58:42.770403       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:11:08 embed-certs-175374 kubelet[908]: E0913 20:11:08.069321     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258268068726983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:08 embed-certs-175374 kubelet[908]: E0913 20:11:08.069651     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258268068726983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:09 embed-certs-175374 kubelet[908]: E0913 20:11:09.895442     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:11:18 embed-certs-175374 kubelet[908]: E0913 20:11:18.071291     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258278070942706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:18 embed-certs-175374 kubelet[908]: E0913 20:11:18.071723     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258278070942706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:21 embed-certs-175374 kubelet[908]: E0913 20:11:21.894172     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:11:28 embed-certs-175374 kubelet[908]: E0913 20:11:28.073038     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258288072757058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:28 embed-certs-175374 kubelet[908]: E0913 20:11:28.073064     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258288072757058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:33 embed-certs-175374 kubelet[908]: E0913 20:11:33.893387     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:11:37 embed-certs-175374 kubelet[908]: E0913 20:11:37.915320     908 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:11:37 embed-certs-175374 kubelet[908]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:11:37 embed-certs-175374 kubelet[908]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:11:37 embed-certs-175374 kubelet[908]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:11:37 embed-certs-175374 kubelet[908]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:11:38 embed-certs-175374 kubelet[908]: E0913 20:11:38.075767     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258298075196374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:38 embed-certs-175374 kubelet[908]: E0913 20:11:38.075799     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258298075196374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:45 embed-certs-175374 kubelet[908]: E0913 20:11:45.894881     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:11:48 embed-certs-175374 kubelet[908]: E0913 20:11:48.077282     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258308076982465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:48 embed-certs-175374 kubelet[908]: E0913 20:11:48.077790     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258308076982465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:56 embed-certs-175374 kubelet[908]: E0913 20:11:56.894084     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:11:58 embed-certs-175374 kubelet[908]: E0913 20:11:58.079735     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258318079261639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:11:58 embed-certs-175374 kubelet[908]: E0913 20:11:58.079781     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258318079261639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:08 embed-certs-175374 kubelet[908]: E0913 20:12:08.081779     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258328081121266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:08 embed-certs-175374 kubelet[908]: E0913 20:12:08.082105     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258328081121266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:12:10 embed-certs-175374 kubelet[908]: E0913 20:12:10.896408     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	
	
	==> storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] <==
	I0913 19:58:43.371678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0913 19:59:13.376333       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] <==
	I0913 19:59:14.213607       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:59:14.227480       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:59:14.227685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:59:14.238129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:59:14.238388       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47!
	I0913 19:59:14.244206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edd1b990-cadd-4e33-a979-885e0597261d", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47 became leader
	I0913 19:59:14.338936       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175374 -n embed-certs-175374
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-175374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fnznh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh: exit status 1 (60.135571ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fnznh" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (545.80s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:06:23.762902   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 11 more times]
E0913 20:06:35.364571   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 3 more times]
E0913 20:06:39.295053   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 34 more times]
E0913 20:07:14.436826   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 4 more times]
E0913 20:07:19.587833   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 28 more times]
E0913 20:07:48.868649   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 13 more times]
E0913 20:08:02.360508   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 39 more times]
E0913 20:08:42.653009   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 7 more times]
E0913 20:08:50.604230   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[warning above repeated 9 more times]
E0913 20:09:00.651174   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:09:06.601492   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:09:11.932110   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:10:00.700597   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:10:12.299092   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:10:13.668680   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:10:51.374017   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:10:57.576078   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:11:39.295821   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[last message repeated 39 more times]
E0913 20:12:19.588050   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[last message repeated 28 more times]
E0913 20:12:48.868265   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[last message repeated 61 more times]
E0913 20:13:50.603777   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
[last message repeated 15 more times]
E0913 20:14:06.600801   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(last message repeated 53 times)
E0913 20:15:00.700217   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(last message repeated 11 times)
E0913 20:15:12.299340   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(last message repeated 7 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (226.666356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-234290" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
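For context: the connection-refused warnings and the final "context deadline exceeded" above are the signature of a wait loop that keeps listing pods by label selector against a stopped apiserver until its deadline runs out. Below is a minimal, hypothetical sketch of such a loop using client-go and apimachinery's wait helpers; it is not minikube's helper code, and the kubeconfig path, 3-second poll interval, and 9-minute timeout are illustrative assumptions.

// wait_for_dashboard.go: hypothetical sketch of a label-selector wait loop,
// NOT the minikube test helper itself. It polls the apiserver for pods
// matching k8s-app=kubernetes-dashboard and gives up when the deadline
// expires, logging a warning on each failed list (e.g. while the apiserver
// is down and connections are refused).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const selector = "k8s-app=kubernetes-dashboard"
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// While the apiserver is unreachable this prints the same kind of
				// "connection refused" warning seen in the log, and polling continues.
				fmt.Printf("WARNING: pod list for %q returned: %v\n", selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// With the apiserver stopped for the whole window this ends in
		// "context deadline exceeded", matching the failure above.
		fmt.Printf("pod %q failed to start: %v\n", selector, err)
	}
}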
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (223.970912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25: (1.674506883s)
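The post-mortem collection above shells out to the minikube binary and times the call (the "(dbg) Run" / "(dbg) Done" pair). A hypothetical, minimal version of that pattern in Go is sketched below; runAndTime is an invented helper name, and only the binary path and arguments are taken from the log line above.

// run_and_time.go: hypothetical sketch of the Run/Done pattern, not the
// actual test harness code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runAndTime runs a command, captures its combined output, and reports how
// long the invocation took, mirroring the "(dbg) Done ... (1.67s)" lines.
func runAndTime(name string, args ...string) (string, time.Duration, error) {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), time.Since(start), err
}

func main() {
	out, elapsed, err := runAndTime("out/minikube-linux-amd64", "-p", "old-k8s-version-234290", "logs", "-n", "25")
	if err != nil {
		fmt.Printf("logs command failed after %s: %v\n", elapsed, err)
	}
	fmt.Printf("collected %d bytes of logs in %s\n", len(out), elapsed)
}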
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
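The repeated "will retry after ..." lines above come from a polling loop that waits for the restarted VM to obtain a DHCP lease, sleeping for a growing, jittered interval between attempts. Below is a minimal, self-contained Go sketch of that general pattern only; the names (getIP, waitForIP), the growth factor, and the jitter are illustrative assumptions, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP stands in for the libvirt DHCP-lease lookup; it fails until the VM
// has been assigned an address. Always failing here keeps the sketch self-contained.
func getIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls getIP with a randomized, growing delay until it succeeds
// or the overall deadline expires.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		// Jitter the delay and grow it, mirroring the increasing
		// "will retry after ..." intervals seen in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}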
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
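The configureAuth step logged above copies the host CA material and generates a server certificate whose SANs are listed in the provision.go line (127.0.0.1, 192.168.50.13, localhost, minikube, no-preload-239327). As a rough illustration of that kind of certificate generation with Go's crypto/x509, the sketch below creates a certificate carrying those SANs; unlike the real flow, which signs the server certificate with the minikube CA key (ca-key.pem), this sketch is self-signed, and every name in it is for demonstration only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a key and a self-signed server certificate; a real provisioner
	// would sign the template with the CA certificate and key instead.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-239327"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-239327"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.13")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}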
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
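The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it with the host's wall clock, and accept the machine when the absolute delta is within a tolerance. The sketch below reproduces that comparison in Go using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// absClockDelta returns the absolute difference between the guest and host clocks.
func absClockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Timestamps taken from the log above (guest clock vs. host "Remote" time).
	guest := time.Unix(1726257450, 452618583).UTC()
	host := time.Unix(1726257450, 377717716).UTC()

	tolerance := 2 * time.Second // assumed tolerance for this sketch
	delta := absClockDelta(guest, host)
	fmt.Printf("guest clock delta is %v (within %v tolerance: %v)\n",
		delta, tolerance, delta <= tolerance) // prints 74.900867ms, as in the log
}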
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
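
With no preload tarball for this profile, each expected image is checked in the container runtime (sudo podman image inspect --format {{.Id}}); anything missing or carrying the wrong ID is removed with crictl rmi and re-loaded from the cached tarball under /var/lib/minikube/images via podman load. A rough sketch of that decision, using a local run helper rather than minikube's ssh_runner:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// run executes a command; here it runs locally purely for illustration,
// whereas minikube runs the same commands on the node over SSH.
func run(args ...string) (string, error) {
    out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    return strings.TrimSpace(string(out)), err
}

// ensureImage mirrors the flow in the log: if the image in the container
// runtime does not match the expected ID, remove it and load the cached
// tarball instead. This is a sketch, not minikube's cache_images API.
func ensureImage(image, wantID, tarball string) error {
    gotID, err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
    if err == nil && gotID == wantID {
        return nil // already present with the right content
    }
    // Stale or missing: drop any old tag, then load from the cache tarball.
    _, _ = run("sudo", "crictl", "rmi", image)
    if _, err := run("sudo", "podman", "load", "-i", tarball); err != nil {
        return fmt.Errorf("loading %s: %w", tarball, err)
    }
    return nil
}

func main() {
    err := ensureImage("registry.k8s.io/etcd:3.5.15-0",
        "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
        "/var/lib/minikube/images/etcd_3.5.15-0")
    fmt.Println(err)
}
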
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
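
The kubelet unit printed above is written as a systemd drop-in; the empty "ExecStart=" line is the standard systemd idiom for clearing the packaged command before the override sets the minikube-specific flags. A small sketch that assembles the same drop-in text, with values taken from the log rather than minikube's actual template code:

package main

import "fmt"

// kubeletDropIn composes the drop-in seen in the log. The empty
// "ExecStart=" clears the unit's packaged command before the override
// (standard systemd drop-in behaviour); flag values come from the log.
func kubeletDropIn(binDir, node, nodeIP string) string {
    return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, binDir, node, nodeIP)
}

func main() {
    fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.31.1", "no-preload-239327", "192.168.50.13"))
}
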
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
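
The kubeadm.yaml dumped above stacks four API objects in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A tiny stdlib-only illustration that splits such a multi-document manifest and lists the kinds, with no real YAML parsing:

package main

import (
    "fmt"
    "strings"
)

// listKinds splits a multi-document kubeadm YAML on "---" separators and
// reports each document's "kind:" line; purely illustrative.
func listKinds(manifest string) []string {
    var kinds []string
    for _, doc := range strings.Split(manifest, "\n---\n") {
        for _, line := range strings.Split(doc, "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasPrefix(trimmed, "kind:") {
                kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
            }
        }
    }
    return kinds
}

func main() {
    manifest := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
    fmt.Println(listKinds(manifest)) // [InitConfiguration ClusterConfiguration]
}
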
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
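
Just above, /etc/hosts is rewritten by filtering out any existing control-plane.minikube.internal line and appending a fresh mapping, the same grep -v / echo pattern used earlier for host.minikube.internal. The string transformation behind that one-liner, as a sketch (rewriteHosts is an illustrative helper, not minikube code):

package main

import (
    "fmt"
    "strings"
)

// rewriteHosts drops any existing line that ends with the given hostname and
// appends a fresh "ip<TAB>hostname" entry, matching the pipeline minikube
// runs over /etc/hosts in the log.
func rewriteHosts(content, hostname, ip string) string {
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+hostname) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+hostname)
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    hosts := "127.0.0.1\tlocalhost\n192.168.50.2\tcontrol-plane.minikube.internal\n"
    fmt.Print(rewriteHosts(hosts, "control-plane.minikube.internal", "192.168.50.13"))
}
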
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
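
The openssl calls above do two things: x509 -hash yields the subject-hash filename that the CA certs are symlinked to under /etc/ssl/certs (e.g. b5213941.0), and -checkend 86400 verifies that none of the control-plane certs expires within the next 24 hours. A small wrapper sketch around those same openssl invocations:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// certExpiresSoon wraps "openssl x509 -checkend": exit status 0 means the
// certificate is still valid for at least the given number of seconds.
func certExpiresSoon(path string, seconds int) (bool, error) {
    cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
    err := cmd.Run()
    if err == nil {
        return false, nil
    }
    if _, ok := err.(*exec.ExitError); ok {
        return true, nil // non-zero exit: expires within the window
    }
    return false, err
}

// subjectHash returns the value "openssl x509 -hash" prints, which is the
// filename (<hash>.0) the cert gets symlinked to under /etc/ssl/certs.
func subjectHash(path string) (string, error) {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    if h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err == nil {
        fmt.Println("would link to /etc/ssl/certs/" + h + ".0")
    }
    soon, _ := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
    fmt.Println("expires within 24h:", soon)
}
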
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
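
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted if it does not contain it; here all four files are missing after the stop, so the rm -f calls are effectively no-ops. A sketch of that prune step (pruneStaleKubeconfig is a hypothetical name for illustration):

package main

import (
    "fmt"
    "os"
    "strings"
)

// pruneStaleKubeconfig removes a kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
// A missing file is treated the same as a stale one.
func pruneStaleKubeconfig(path, endpoint string) (removed bool, err error) {
    data, err := os.ReadFile(path)
    if err == nil && strings.Contains(string(data), endpoint) {
        return false, nil // up to date, keep it
    }
    if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
        return false, rmErr
    }
    return true, nil
}

func main() {
    for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
        removed, err := pruneStaleKubeconfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
        fmt.Println(f, "removed:", removed, "err:", err)
    }
}
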
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
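
Because the node was stopped and its kubeconfigs removed, the restart path re-runs individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of a full init. A driver sketch for that sequence; paths match the log, but it is only meaningful when run on the node itself:

package main

import (
    "fmt"
    "os/exec"
)

// phases mirrors the sequence of "kubeadm init phase ..." commands in the log.
var phases = [][]string{
    {"certs", "all"},
    {"kubeconfig", "all"},
    {"kubelet-start"},
    {"control-plane", "all"},
    {"etcd", "local"},
}

func main() {
    const kubeadm = "/var/lib/minikube/binaries/v1.31.1/kubeadm"
    const cfg = "/var/tmp/minikube/kubeadm.yaml"
    for _, p := range phases {
        args := append([]string{"init", "phase"}, p...)
        args = append(args, "--config", cfg)
        fmt.Println("running:", kubeadm, args)
        if out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput(); err != nil {
            fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
            return
        }
    }
}
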
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
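
With the control-plane manifests in place, the code first waits for a kube-apiserver process (pgrep) and then polls /healthz on https://192.168.50.13:8443 until it answers. A minimal polling sketch; it skips TLS verification purely for illustration, whereas the real check trusts the cluster CA:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. InsecureSkipVerify is only for this sketch.
func waitHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if resp, err := client.Get(url); err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver did not become healthy at %s", url)
}

func main() {
    fmt.Println(waitHealthz("https://192.168.50.13:8443/healthz", 30*time.Second))
}
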
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
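
fixHost found the old-k8s-version-234290 domain stopped, so it re-activates the libvirt networks and boots the existing domain instead of recreating it. A rough virsh-based equivalent of those steps; minikube itself drives this through the kvm2 driver plugin, not virsh:

package main

import (
    "fmt"
    "os/exec"
)

// restartDomain approximates the libmachine steps in the log: make sure the
// libvirt networks are active, then start the stopped domain.
func restartDomain(domain string, networks ...string) error {
    for _, n := range networks {
        // "net-start" fails if the network is already active; ignore that here.
        _ = exec.Command("virsh", "net-start", n).Run()
    }
    if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
        return fmt.Errorf("starting %s: %v\n%s", domain, err, out)
    }
    return nil
}

func main() {
    fmt.Println(restartDomain("old-k8s-version-234290", "default", "mk-old-k8s-version-234290"))
}
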
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
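
WaitForSSH shells out to the external ssh client with host-key checking disabled and a short connect timeout, running "exit 0" until the guest answers. A sketch of that probe loop, reusing the options visible in the log above:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// sshReady probes the node the way the log's WaitForSSH does: run "exit 0"
// over ssh with host-key checking disabled and a short connection timeout.
func sshReady(user, ip, key string) bool {
    args := []string{
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", key,
        fmt.Sprintf("%s@%s", user, ip),
        "exit 0",
    }
    return exec.Command("ssh", args...).Run() == nil
}

func main() {
    key := "/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa"
    for i := 0; i < 10; i++ {
        if sshReady("docker", "192.168.61.3", key) {
            fmt.Println("ssh is up")
            return
        }
        time.Sleep(3 * time.Second)
    }
    fmt.Println("gave up waiting for ssh")
}
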
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
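
	The exchange above sets the machine hostname and makes /etc/hosts resolve it. For reference, a minimal Go sketch of the same idempotent check-and-rewrite, assuming a locally readable /etc/hosts (the real run performs it over SSH as the shell snippet shows):

// Hedged sketch: ensure /etc/hosts maps 127.0.1.1 to the machine hostname,
// mirroring the shell run over SSH above. Not minikube's own code.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already present on some line? Nothing to do.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		out = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-512125"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
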
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
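
	configureAuth above first refreshes the host-side certificates (cert.pem, key.pem, ca.pem) before generating and copying the server certificate. A small Go sketch of just the copyHostCerts part, using the paths reported by exec_runner; error handling is simplified and this is not minikube's implementation:

// Hedged sketch of the copyHostCerts step logged above: refresh ca.pem,
// cert.pem and key.pem in the .minikube root from the certs directory.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func copyHostCert(minikubeHome, name string) error {
	src := filepath.Join(minikubeHome, "certs", name)
	dst := filepath.Join(minikubeHome, name)
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	// Remove any stale copy first, as the "found ..., removing ..." lines do.
	_ = os.Remove(dst)
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	home := "/home/jenkins/minikube-integration/19636-3902/.minikube"
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := copyHostCert(home, name); err != nil {
			log.Fatal(err)
		}
	}
}
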
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
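
	provisionDockerMachine finishes by writing the CRI-O options file and restarting the runtime, as the tee/systemctl command above shows. A hedged local sketch of that step (the real run executes it over SSH and requires root):

// Hedged sketch: write /etc/sysconfig/crio.minikube with the
// --insecure-registry flag for the service CIDR and restart CRI-O.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const content = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		log.Fatal(err)
	}
	// Restart so the new options are picked up.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}
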
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
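
	fix.go above compares the guest's `date +%s.%N` output with the host clock and accepts the ~85ms delta. A sketch of that tolerance check; it assumes nine fractional digits, as in the logged value, and the 2s tolerance here is illustrative, not minikube's setting:

// Hedged sketch of a guest-clock drift check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses a "seconds.nanoseconds" string (nine fractional
// digits assumed) and checks the absolute delta against the host time.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	delta := time.Duration(math.Abs(float64(host.Sub(guest))))
	return delta, delta <= tol
}

func main() {
	delta, ok := withinTolerance("1726257471.303496315", time.Now(), 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
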
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
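
	The find/mv step above sidelines competing CNI configs by renaming them to *.mk_disabled. A Go sketch of the same rename pass over /etc/cni/net.d (illustrative only):

// Hedged sketch: rename bridge/podman conflists so only minikube's CNI
// config is loaded, mirroring the `find ... -exec mv {} {}.mk_disabled` run.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}
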
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
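
	The sed commands above point CRI-O at the pause image and the cgroupfs cgroup manager. A sketch covering those first two edits with anchored regexp rewrites of 02-crio.conf (the remaining conmon_cgroup/sysctl edits are analogous); values are the ones the log reports:

// Hedged sketch of the CRI-O config edits logged above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Replace whole lines, matching the anchored sed expressions in the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
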
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
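
	When the bridge-nf-call-iptables sysctl is missing, the run falls back to loading br_netfilter and then enables IPv4 forwarding, as the two commands above show. A minimal sketch of that fallback; it must run as root and is not minikube's own code:

// Hedged sketch: load br_netfilter if the bridge sysctl is absent, then
// enable IPv4 forwarding.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Same fallback the log takes after sysctl exits with status 255.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
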
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
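
	After restarting CRI-O the run waits up to 60s for crictl to answer, then logs the version block above. A sketch of such a retry loop around `crictl version`:

// Hedged sketch: poll crictl until the runtime responds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForCrictl(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("crictl not ready after %v: %v", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	out, err := waitForCrictl(60 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
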
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
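
	Because no preloaded images were found, the cached tarball is copied in and unpacked into /var, as the stat/scp/tar lines above show. A local sketch of that check-copy-extract sequence; the real run copies over SSH, the cache path is the one logged above, and lz4 plus root privileges are assumed:

// Hedged sketch: ensure the preload tarball is present, then extract it.
package main

import (
	"io"
	"log"
	"os"
	"os/exec"
)

func main() {
	const target = "/preloaded.tar.lz4"
	if _, err := os.Stat(target); os.IsNotExist(err) {
		src, err := os.Open("/home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer src.Close()
		dst, err := os.Create(target)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(dst, src); err != nil {
			log.Fatal(err)
		}
		dst.Close()
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", target)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract: %v: %s", err, out)
	}
}
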
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
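
	The health wait above polls /healthz, riding out the 403 and 500 responses until the bootstrap post-start hooks report ok. A sketch of such a polling loop; unlike the real check it simply skips TLS verification instead of presenting client certificates:

// Hedged sketch: poll the apiserver's /healthz until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver not healthy after %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForHealthz("https://192.168.50.13:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
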
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
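
	With the apiserver healthy, a bridge CNI conflist is written to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). A sketch that writes an illustrative bridge+portmap conflist; the exact fields and subnet are assumptions, not the file minikube templates:

// Hedged sketch: write a minimal bridge CNI config.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
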
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
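
	The extra wait above gates each system-critical pod on its node being Ready, which is why every pod is skipped while no-preload-239327 reports Ready=False. A sketch of an equivalent readiness poll driven through kubectl jsonpath, using the context and pod name from the log; minikube itself uses client-go rather than kubectl:

// Hedged sketch: poll a pod's Ready condition until "True" or timeout.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pod", pod, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %v", namespace, pod, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	err := waitPodReady("no-preload-239327", "kube-system", "coredns-7c65d6cfc9-fjzxv", 4*time.Minute)
	fmt.Println(err)
}
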
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
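The two log lines above read the kube-apiserver's OOM score adjustment out of procfs; a value of -16 makes the kernel's OOM killer much less likely to kill the apiserver under memory pressure. A minimal sketch of the same check, assuming a single kube-apiserver process (the helper name is illustrative, not minikube's own):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

// apiServerOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
// find the newest kube-apiserver process and read its oom_adj value.
func apiServerOOMAdj() (int, error) {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return 0, fmt.Errorf("pgrep kube-apiserver: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	adj, err := apiServerOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", adj) // the run above reports -16
}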
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
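Addon installation above is just an scp of each manifest into /etc/kubernetes/addons followed by a single kubectl apply against the node-local kubeconfig and kubectl binary. A hedged sketch of that apply step; the direct shell-out here is illustrative, since the real code routes the command through minikube's ssh runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyAddonManifests mirrors the kubectl invocation in the log: all addon
// manifests already copied onto the node are applied in one call, using the
// kubeconfig and kubectl binary that live on the node itself.
func applyAddonManifests(kubectlVersion string, manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Println(strings.TrimSpace(string(out)))
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests("v1.31.1", manifests); err != nil {
		fmt.Println(err)
	}
}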
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
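The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line for control-plane.minikube.internal and appends a fresh mapping to the node IP. A minimal Go sketch of the same rewrite, assuming it runs as root (the real flow writes a temp file and copies it into place with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites /etc/hosts so exactly one line maps the
// control-plane hostname to the given IP, like the one-liner above.
func ensureHostsEntry(ip, host string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	// Sketch only: writing /etc/hosts directly assumes root privileges.
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("192.168.61.3", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}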
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
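The openssl calls above do two things: publish each CA under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients on the node trust it, and verify that none of the control-plane certificates expires within the next 24 hours (-checkend 86400). A compact sketch of both checks; the helper names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCATrust symlinks a CA certificate under /etc/ssl/certs using its
// OpenSSL subject hash, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem.
func installCATrust(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

// expiresWithinADay reports whether the certificate expires within the next
// 86400 seconds; openssl exits non-zero when it will (or on any other error).
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	if err := installCATrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("trust install failed:", err)
	}
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s expires within 24h: %v\n", c, expiresWithinADay(c))
	}
}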
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
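The grep/rm sequence above is the stale-config check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already points at https://control-plane.minikube.internal:8444, otherwise it is removed so the following kubeadm init phases can regenerate it. A hedged sketch of that loop (local file access stands in for the ssh runner):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm pattern in the log.
func pruneStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it and let kubeadm recreate it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				return rmErr
			}
			fmt.Println("removed stale", f)
		}
	}
	return nil
}

func main() {
	if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444"); err != nil {
		fmt.Println(err)
	}
}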
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
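For reference: the repeated 500s above are the apiserver's verbose /healthz output while its post-start hooks (here rbac/bootstrap-roles and the system priority-class bootstrap) finish. Assuming anonymous access to the health endpoints is left at its Kubernetes default, the same per-check breakdown can be reproduced by hand against the host/port shown in this log; a minimal sketch:

    # Query the apiserver health endpoint with per-check detail (-k: the VM's self-signed cert).
    curl -k "https://192.168.61.3:8444/healthz?verbose"
    # Or through kubectl once the kubeconfig context exists (context name assumed to match the profile here):
    kubectl --context default-k8s-diff-port-512125 get --raw "/healthz?verbose"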
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
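The 496-byte file scp'd here is the bridge CNI config minikube writes for this profile; to inspect exactly what landed on the node, something like the following should work (profile name taken from this run):

    # Dump the bridge CNI conflist that was just written into the VM.
    minikube -p default-k8s-diff-port-512125 ssh -- cat /etc/cni/net.d/1-k8s.conflist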
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
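If a TLS problem were suspected at this point, the SAN set baked into the freshly generated server cert (the san=[...] list above) can be checked directly on the runner; a quick sketch using the path from this log:

    # Print the Subject Alternative Names of the machine server certificate minikube just generated.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'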
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
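After the three sed edits above (pause image, cgroup manager, conmon cgroup), the CRI-O drop-in inside the VM should carry roughly the following keys; a quick way to confirm, with the profile name from this run and the expected output approximate:

    # Check the CRI-O drop-in that was just rewritten.
    minikube -p old-k8s-version-234290 ssh -- grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"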
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
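The 403 / 500 / 200 progression above is the expected startup sequence: requests are rejected as anonymous with 403 while authorization is not yet usable, the endpoint then returns 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still pending, and finally a bare "ok" once every check passes. The same probe can be reproduced by hand; the authenticated variant below reuses the on-node kubeconfig path that appears in the addon apply commands later in this log:

    # unauthenticated probe; a 403 is normal until the RBAC bootstrap hook finishes
    curl -ks https://192.168.39.32:8443/healthz; echo
    # authenticated, verbose probe showing the per-hook [+]/[-] breakdown
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl get --raw '/healthz?verbose'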
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
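The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not printed in the log; the sketch below shows how to inspect the real file and, as commented-out illustration only, the general shape such a conflist takes (plugin names and the pod subnet are assumptions, not values read from this run):

    # inspect the file minikube actually wrote
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # a typical bridge conflist has roughly this shape (illustrative values):
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
    #       "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }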
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
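The oom_adj check above verifies that the restarted apiserver carries the expected OOM-score adjustment: -16 in the legacy oom_adj interface is the kernel's view of the -997 oom_score_adj the kubelet typically assigns to Guaranteed control-plane pods, so the OOM killer targets the apiserver only as a last resort. The same check by hand, plus the modern interface:

    sudo /bin/bash -c 'cat /proc/$(pgrep kube-apiserver)/oom_adj'        # legacy interface; -16 in this run
    sudo /bin/bash -c 'cat /proc/$(pgrep kube-apiserver)/oom_score_adj'  # typically -997 for Guaranteed control-plane pods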
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
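With the three addons applied, a quick follow-up is to wait for the metrics-server Deployment to roll out and then query the API it serves. The context name below assumes the usual minikube convention of naming the kubeconfig context after the profile (embed-certs-175374):

    kubectl --context embed-certs-175374 -n kube-system rollout status deployment/metrics-server --timeout=5m
    kubectl --context embed-certs-175374 top nodes    # works only once the metrics.k8s.io APIService is Available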
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
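node_ready.go simply polls the node object until its Ready condition reports True. The equivalent manual check, reusing the on-node kubeconfig seen in the addon apply commands above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl get node embed-certs-175374 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
    # or block until the condition flips, matching the 6m0s wait used above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl wait --for=condition=Ready node/embed-certs-175374 --timeout=6m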
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
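From here on, the recurring "Ready":"False" entries are all metrics-server pods. In these runs that is expected to persist: the addon image was overridden to fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), a registry that does not resolve, so the image pull most likely fails and the pod never becomes Ready. A way to confirm the reason directly (the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests):

    kubectl --context embed-certs-175374 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context embed-certs-175374 -n kube-system describe pod metrics-server-6867b74b74-fnznh | tail -n 20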
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
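The block above is minikube's apiserver wait-and-diagnose loop for this profile: roughly every 500ms it probes for a running kube-apiserver process with pgrep, and once the probe keeps failing it falls back to enumerating CRI containers and collecting kubelet, dmesg, CRI-O and node diagnostics. A minimal sketch of the same probes run by hand on the node (the commands are taken verbatim from the log lines above; the kubectl path is specific to this v1.20.0 run):

    # is an apiserver process up at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # any kube-apiserver / etcd containers known to CRI-O? (empty output = none found)
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # the same diagnostics minikube gathers when the probe fails
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

With no control-plane containers present, the last command fails with the same "connection to the server localhost:8443 was refused" error that repeats throughout this log.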
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
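The interleaved pod_ready lines come from the other profiles in this run, each polling the same condition: the metrics-server pod exists but its Ready condition stays False. A quick way to check the same thing by hand, assuming access to the right context/kubeconfig (pod name and namespace are taken from the log; the deployment name metrics-server is an assumption inferred from the pod name):

    # prints "True" once the pod's Ready condition flips
    kubectl -n kube-system get pod metrics-server-6867b74b74-fnznh \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or watch it at the deployment level (assumed deployment name)
    kubectl -n kube-system rollout status deployment/metrics-server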
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
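Each "connection to the server localhost:8443 was refused" block is a direct consequence of the empty crictl listings above: kubectl is pointed at the local apiserver endpoint, but nothing is listening on port 8443 because the kube-apiserver container never came up. Two quick node-side checks (standard tools, not taken from this log):

    # is anything listening on the apiserver's secure port?
    sudo ss -ltnp | grep 8443
    # does the endpoint answer at all? (connection refused while the apiserver is down)
    curl -k https://localhost:8443/healthz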
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
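	The interleaved pod_ready lines come from three other test profiles running in parallel (process IDs 71233, 71702 and 71424), each polling a metrics-server pod in kube-system that never reports Ready. One way to reproduce that check by hand is sketched below; the pod name is the one printed in this log and will differ between runs, and kubectl must be pointed at the matching profile's context:

	    kubectl -n kube-system get pod metrics-server-6867b74b74-fnznh \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod exists but its readiness condition is not met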
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
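	The container-status gather above is already written defensively: it resolves crictl through which and, if the CRI CLI is missing entirely, falls back to listing containers with docker. Run standalone, it is the command quoted in the log (backtick substitution rewritten as $(...) here; behaviour is the same):

	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a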
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
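	Every "describe nodes" attempt in this stretch runs the pinned v1.20.0 kubectl against /var/lib/minikube/kubeconfig and exits with status 1 on "connection refused" for localhost:8443, which points at the API server never having come up rather than at a kubectl misconfiguration. A quick manual check from inside the node would be the following; the first command is copied from the log, and curl being present in the guest image is an assumption:

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    curl -k https://localhost:8443/healthz   # probe the apiserver port directly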
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
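	The interleaved pod_ready lines come from three other runs (71233, 71702 and 71424) polling the Ready condition of their metrics-server pods and repeatedly observing "False". A minimal sketch of the equivalent manual check (pod name copied from the log; running it against the matching cluster context is assumed, the command itself is illustrative and not part of the test output):
	
	    # print the Ready condition that pod_ready.go is waiting on
	    kubectl --namespace kube-system get pod metrics-server-6867b74b74-fnznh \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	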
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
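Note on the "Gathering logs for ..." steps above: each component's output is collected by running `crictl logs --tail 400 <container-id>` on the node. A minimal standalone sketch of that collection loop is below; it is not minikube's logs.go, the container IDs are placeholders, and it assumes local `sudo crictl` access.

```go
package main

import (
	"fmt"
	"os/exec"
)

// collectLogs runs `sudo crictl logs --tail 400 <id>` for each container ID
// and returns the combined output per ID, mirroring the per-component
// gathering recorded in this report. IDs here are placeholders.
func collectLogs(ids []string) map[string]string {
	out := make(map[string]string)
	for _, id := range ids {
		b, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			out[id] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[id] = string(b)
	}
	return out
}

func main() {
	logs := collectLogs([]string{"<kube-apiserver-id>", "<etcd-id>"})
	for id, l := range logs {
		fmt.Printf("=== %s ===\n%s\n", id, l)
	}
}
```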
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
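For context on the addon sequence just logged (storage-provisioner, default-storageclass, metrics-server): the manifests are copied to /etc/kubernetes/addons and then applied with the node's bundled kubectl against the node-local kubeconfig. The sketch below only mirrors that command shape as seen in the log; it is an illustration to be run on the minikube node, not minikube's own addons code.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// applyAddon mirrors the command recorded above: the node's kubectl binary,
// pointed at the node-local kubeconfig, applying a staged addon manifest.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	for _, m := range manifests {
		if err := applyAddon(m); err != nil {
			log.Fatal(err)
		}
	}
}
```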
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
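The healthz wait above boils down to a GET against https://192.168.61.3:8444/healthz that must return HTTP 200 ("ok"). A minimal sketch of that probe follows; certificate verification is skipped purely to keep the example short, whereas the real check validates against the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// healthz performs the same kind of probe the log records: GET /healthz on
// the apiserver and accept only HTTP 200.
func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := healthz("https://192.168.61.3:8444/healthz")
	fmt.Println(ok, err)
}
```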
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
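The pod_ready waits logged for this profile amount to polling each pod's Ready condition until it is True or the deadline passes. Below is a minimal client-go sketch of that loop, assuming the kubeconfig path and pod name shown in the log; it is an illustration, not minikube's pod_ready.go.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until Ready, sleeping between attempts (the real wait also
	// enforces an overall 6m0s deadline).
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-7c65d6cfc9-2qg68", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```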
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
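	A minimal sketch of the same readiness checks, assuming the kubeconfig written above and the embed-certs-175374 profile/node name from this run:

	    kubectl --context embed-certs-175374 get pods -n kube-system            # all Running except the pending metrics-server pod
	    kubectl --context embed-certs-175374 get serviceaccount default         # default service account present
	    kubectl --context embed-certs-175374 describe node embed-certs-175374   # Capacity section shows the cpu/ephemeral-storage values logged above
	    minikube ssh -p embed-certs-175374 -- sudo systemctl is-active kubelet  # kubelet service check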
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
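	The troubleshooting advice kubeadm prints above reduces to checking the kubelet unit and the control-plane containers; collected into one sketch (commands and socket path quoted from the output above):

	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the ps output
	    sudo systemctl enable kubelet.service                                     # addresses the [WARNING Service-Kubelet] above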
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
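	A sketch of the stale-config cleanup performed above; the file list and endpoint are taken from the log, the loop form is illustrative:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected control-plane endpoint
	    done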
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
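	The per-component container checks above can be collapsed into one loop (component names and the crictl invocation are quoted from the log):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name="$name"   # empty output here means no such container was ever created
	    done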
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
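	When 'kubectl describe nodes' fails with connection refused on localhost:8443 as above, the usual follow-up is to confirm whether an apiserver container exists at all; a hedged sketch (socket path and port from this log):

	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name=kube-apiserver
	    curl -sk https://localhost:8443/healthz   # expected to fail in this run, since no kube-apiserver container was found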
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
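	The command referenced in the box, with the profile from this run filled in (profile name taken from the CRI-O log below):

	    minikube logs --file=logs.txt -p old-k8s-version-234290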
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
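	A hedged sketch of the suggested retry; the flag is quoted from the suggestion above, and the other start flags this test actually passes are not shown here:

	    minikube start -p old-k8s-version-234290 --extra-config=kubelet.cgroup-driver=systemd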
	
	
	==> CRI-O <==
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.910570899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258522910542214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdbcf6e0-3d84-4729-9674-7211cba77993 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.911262627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa78e6ab-09f8-4669-819e-168a5047533c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.911315090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa78e6ab-09f8-4669-819e-168a5047533c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.911356839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa78e6ab-09f8-4669-819e-168a5047533c name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.943373673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84fc43fc-f227-499d-ba35-47b097be4ec6 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.943452469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84fc43fc-f227-499d-ba35-47b097be4ec6 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.944797825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77089526-b63d-48a3-b6ba-f288a704ccf5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.945218169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258522945194798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77089526-b63d-48a3-b6ba-f288a704ccf5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.945874183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97769380-ce5c-476f-bf52-57dcb065a658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.945925802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97769380-ce5c-476f-bf52-57dcb065a658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.945965358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97769380-ce5c-476f-bf52-57dcb065a658 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.979822719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be7a2fc0-2407-4bc5-b775-d8b50ce37e09 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.979914874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be7a2fc0-2407-4bc5-b775-d8b50ce37e09 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.981330623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f063b476-44b2-4f2b-84f1-65a1ba745f0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.981858923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258522981774795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f063b476-44b2-4f2b-84f1-65a1ba745f0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.982424902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d53968fa-f95a-4d19-b9dd-e84e6276bb33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.982499842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d53968fa-f95a-4d19-b9dd-e84e6276bb33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:22 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:22.982542291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d53968fa-f95a-4d19-b9dd-e84e6276bb33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.014885045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36bc22f7-b73a-46e0-b831-6e815a79898b name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.014981519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36bc22f7-b73a-46e0-b831-6e815a79898b name=/runtime.v1.RuntimeService/Version
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.016250746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7b20c58-3111-4e18-b2e8-38b5d98789e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.016632088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258523016608659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7b20c58-3111-4e18-b2e8-38b5d98789e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.017056751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82cd4aec-209e-4b78-982f-c111a44b1a57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.017106990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82cd4aec-209e-4b78-982f-c111a44b1a57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:15:23 old-k8s-version-234290 crio[635]: time="2024-09-13 20:15:23.017141397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=82cd4aec-209e-4b78-982f-c111a44b1a57 name=/runtime.v1.RuntimeService/ListContainers
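	The requests CRI-O is answering above are routine runtime polling and can be reissued by hand (socket path as used elsewhere in this log):

	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock version
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock imagefsinfo   # image filesystem usage, as in the ImageFsInfo responses above
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a         # empty list, matching the ListContainers responses above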
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep13 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066109] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep13 19:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610500] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.676115] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.362178] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.066050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062575] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.203353] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.197412] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.328737] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.657608] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.063640] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.000194] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +13.374485] kauditd_printk_skb: 46 callbacks suppressed
	[Sep13 20:02] systemd-fstab-generator[5056]: Ignoring "noauto" option for root device
	[Sep13 20:04] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[  +0.071026] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:15:23 up 17 min,  0 users,  load average: 0.08, 0.08, 0.04
	Linux old-k8s-version-234290 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: bufio.(*Reader).Read(0xc000cfeb40, 0xc000be2118, 0x9, 0x9, 0xc000c80dc8, 0x40a605, 0xc0000e4f00)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /usr/local/go/src/bufio/bufio.go:227 +0x222
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: io.ReadAtLeast(0x4f04880, 0xc000cfeb40, 0xc000be2118, 0x9, 0x9, 0x9, 0xc000c50260, 0x3f50d20, 0xc000cdae80)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /usr/local/go/src/io/io.go:314 +0x87
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: io.ReadFull(...)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /usr/local/go/src/io/io.go:333
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000be2118, 0x9, 0x9, 0x4f04880, 0xc000cfeb40, 0x0, 0xc000000000, 0xc000cdae80, 0xc000122160)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000be20e0, 0xc000cf59b0, 0x1, 0x0, 0x0)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00056ae00)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: goroutine 150 [select]:
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c4a280, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000cfec00, 0x0, 0x0)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00056ae00)
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 13 20:15:23 old-k8s-version-234290 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 13 20:15:23 old-k8s-version-234290 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 13 20:15:23 old-k8s-version-234290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (228.384378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-234290" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)
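Editor's note: the failure above is gated by the apiserver state check logged at helpers_test.go:254-256: `minikube status --format={{.APIServer}}` returned "Stopped" (exit status 2), so the harness skipped its kubectl-based checks. The sketch below is a minimal illustration of that gate's shape, assuming the binary path and profile name shown in the log; it is not the helpers_test.go implementation.

	// statusgate.go: run the same status command the harness ran and only
	// proceed with kubectl-based checks when the apiserver reports Running.
	// Binary path and profile name are copied from the log; everything else
	// is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func apiServerRunning(minikubeBin, profile string) (bool, error) {
		out, err := exec.Command(minikubeBin,
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			// The log above shows exit status 2 with state "Stopped"; return the
			// parsed state alongside the error so the caller can still decide.
			return state == "Running", fmt.Errorf("status exited non-zero (state=%q): %w", state, err)
		}
		return state == "Running", nil
	}

	func main() {
		ok, err := apiServerRunning("out/minikube-linux-amd64", "old-k8s-version-234290")
		if !ok {
			fmt.Printf("apiserver not running (err=%v), skipping kubectl commands\n", err)
			return
		}
		fmt.Println("apiserver running, kubectl checks would proceed here")
	}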

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-239327 -n no-preload-239327
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:18:25.60002918 +0000 UTC m=+7059.079034405
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-239327 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-239327 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.639µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-239327 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
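Editor's note: the assertion at start_stop_delete_test.go:297 above expects the dashboard-metrics-scraper deployment description to mention " registry.k8s.io/echoserver:1.4" (the image the addon was enabled with earlier in the Audit table). A minimal sketch of that check follows, assuming the kubectl context, namespace, deployment name, and image string taken from the log; the timeout handling is illustrative and this is not the harness's own code.

	// addonimagecheck.go: describe the dashboard-metrics-scraper deployment and
	// report whether its output contains the expected image string.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func addonUsesImage(kubeContext, image string, timeout time.Duration) (bool, error) {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("describe failed: %w (output: %s)", err, out)
		}
		return strings.Contains(string(out), image), nil
	}

	func main() {
		ok, err := addonUsesImage("no-preload-239327", "registry.k8s.io/echoserver:1.4", 30*time.Second)
		if err != nil {
			fmt.Println("could not inspect deployment:", err)
			return
		}
		fmt.Println("deployment references expected image:", ok)
	}
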
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-239327 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-239327 logs -n 25: (2.587068026s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC | 13 Sep 24 20:17 UTC |
	| start   | -p newest-cni-350416 --memory=2200 --alsologtostderr   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 20:17:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 20:17:53.070497   78618 out.go:345] Setting OutFile to fd 1 ...
	I0913 20:17:53.070593   78618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:17:53.070600   78618 out.go:358] Setting ErrFile to fd 2...
	I0913 20:17:53.070605   78618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:17:53.070769   78618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 20:17:53.071310   78618 out.go:352] Setting JSON to false
	I0913 20:17:53.072197   78618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7216,"bootTime":1726251457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 20:17:53.072297   78618 start.go:139] virtualization: kvm guest
	I0913 20:17:53.074742   78618 out.go:177] * [newest-cni-350416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 20:17:53.076168   78618 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 20:17:53.076173   78618 notify.go:220] Checking for updates...
	I0913 20:17:53.077496   78618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 20:17:53.078841   78618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:17:53.080144   78618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.081355   78618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 20:17:53.082651   78618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 20:17:53.084248   78618 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084356   78618 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084465   78618 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084558   78618 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 20:17:53.123733   78618 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 20:17:53.125240   78618 start.go:297] selected driver: kvm2
	I0913 20:17:53.125264   78618 start.go:901] validating driver "kvm2" against <nil>
	I0913 20:17:53.125275   78618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 20:17:53.126002   78618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:17:53.126118   78618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 20:17:53.141801   78618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 20:17:53.141850   78618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0913 20:17:53.141910   78618 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0913 20:17:53.142251   78618 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 20:17:53.142288   78618 cni.go:84] Creating CNI manager for ""
	I0913 20:17:53.142331   78618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:17:53.142339   78618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 20:17:53.142390   78618 start.go:340] cluster config:
	{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:17:53.142485   78618 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:17:53.144224   78618 out.go:177] * Starting "newest-cni-350416" primary control-plane node in "newest-cni-350416" cluster
	I0913 20:17:53.145413   78618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 20:17:53.145459   78618 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 20:17:53.145469   78618 cache.go:56] Caching tarball of preloaded images
	I0913 20:17:53.145549   78618 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 20:17:53.145592   78618 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 20:17:53.145722   78618 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:17:53.145751   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json: {Name:mkf82a3c8c9c4e29633352da6b0f98ea61c3d7f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:17:53.145944   78618 start.go:360] acquireMachinesLock for newest-cni-350416: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 20:17:53.145996   78618 start.go:364] duration metric: took 30.476µs to acquireMachinesLock for "newest-cni-350416"
	I0913 20:17:53.146021   78618 start.go:93] Provisioning new machine with config: &{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:17:53.146081   78618 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 20:17:53.147820   78618 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 20:17:53.147975   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:17:53.148020   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:17:53.163213   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0913 20:17:53.163746   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:17:53.164384   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:17:53.164409   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:17:53.164812   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:17:53.165005   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:17:53.165217   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:17:53.165432   78618 start.go:159] libmachine.API.Create for "newest-cni-350416" (driver="kvm2")
	I0913 20:17:53.165457   78618 client.go:168] LocalClient.Create starting
	I0913 20:17:53.165490   78618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 20:17:53.165525   78618 main.go:141] libmachine: Decoding PEM data...
	I0913 20:17:53.165540   78618 main.go:141] libmachine: Parsing certificate...
	I0913 20:17:53.165588   78618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 20:17:53.165605   78618 main.go:141] libmachine: Decoding PEM data...
	I0913 20:17:53.165615   78618 main.go:141] libmachine: Parsing certificate...
	I0913 20:17:53.165628   78618 main.go:141] libmachine: Running pre-create checks...
	I0913 20:17:53.165638   78618 main.go:141] libmachine: (newest-cni-350416) Calling .PreCreateCheck
	I0913 20:17:53.166045   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:17:53.166531   78618 main.go:141] libmachine: Creating machine...
	I0913 20:17:53.166545   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Create
	I0913 20:17:53.166693   78618 main.go:141] libmachine: (newest-cni-350416) Creating KVM machine...
	I0913 20:17:53.167907   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found existing default KVM network
	I0913 20:17:53.169112   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.168969   78657 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:be:5d:74} reservation:<nil>}
	I0913 20:17:53.169801   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.169741   78657 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:5e:80} reservation:<nil>}
	I0913 20:17:53.170645   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.170569   78657 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:64:3c} reservation:<nil>}
	I0913 20:17:53.171682   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.171623   78657 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003acfc0}
	I0913 20:17:53.171713   78618 main.go:141] libmachine: (newest-cni-350416) DBG | created network xml: 
	I0913 20:17:53.171726   78618 main.go:141] libmachine: (newest-cni-350416) DBG | <network>
	I0913 20:17:53.171735   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <name>mk-newest-cni-350416</name>
	I0913 20:17:53.171743   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <dns enable='no'/>
	I0913 20:17:53.171769   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   
	I0913 20:17:53.171791   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0913 20:17:53.171803   78618 main.go:141] libmachine: (newest-cni-350416) DBG |     <dhcp>
	I0913 20:17:53.171812   78618 main.go:141] libmachine: (newest-cni-350416) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0913 20:17:53.171818   78618 main.go:141] libmachine: (newest-cni-350416) DBG |     </dhcp>
	I0913 20:17:53.171824   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   </ip>
	I0913 20:17:53.171839   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   
	I0913 20:17:53.171843   78618 main.go:141] libmachine: (newest-cni-350416) DBG | </network>
	I0913 20:17:53.171849   78618 main.go:141] libmachine: (newest-cni-350416) DBG | 
	I0913 20:17:53.177191   78618 main.go:141] libmachine: (newest-cni-350416) DBG | trying to create private KVM network mk-newest-cni-350416 192.168.72.0/24...
	I0913 20:17:53.249161   78618 main.go:141] libmachine: (newest-cni-350416) DBG | private KVM network mk-newest-cni-350416 192.168.72.0/24 created
	I0913 20:17:53.249206   78618 main.go:141] libmachine: (newest-cni-350416) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 ...
	I0913 20:17:53.249223   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.249133   78657 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.249235   78618 main.go:141] libmachine: (newest-cni-350416) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 20:17:53.249265   78618 main.go:141] libmachine: (newest-cni-350416) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 20:17:53.497980   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.497825   78657 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa...
	I0913 20:17:53.694456   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.694323   78657 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/newest-cni-350416.rawdisk...
	I0913 20:17:53.694477   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Writing magic tar header
	I0913 20:17:53.694489   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Writing SSH key tar header
	I0913 20:17:53.694497   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.694436   78657 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 ...
	I0913 20:17:53.694523   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416
	I0913 20:17:53.694555   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 (perms=drwx------)
	I0913 20:17:53.694576   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 20:17:53.694594   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 20:17:53.694607   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.694612   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 20:17:53.694619   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 20:17:53.694625   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 20:17:53.694638   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins
	I0913 20:17:53.694660   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home
	I0913 20:17:53.694673   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Skipping /home - not owner
	I0913 20:17:53.694689   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 20:17:53.694700   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 20:17:53.694708   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 20:17:53.694715   78618 main.go:141] libmachine: (newest-cni-350416) Creating domain...
	I0913 20:17:53.695851   78618 main.go:141] libmachine: (newest-cni-350416) define libvirt domain using xml: 
	I0913 20:17:53.695876   78618 main.go:141] libmachine: (newest-cni-350416) <domain type='kvm'>
	I0913 20:17:53.695885   78618 main.go:141] libmachine: (newest-cni-350416)   <name>newest-cni-350416</name>
	I0913 20:17:53.695891   78618 main.go:141] libmachine: (newest-cni-350416)   <memory unit='MiB'>2200</memory>
	I0913 20:17:53.695900   78618 main.go:141] libmachine: (newest-cni-350416)   <vcpu>2</vcpu>
	I0913 20:17:53.695910   78618 main.go:141] libmachine: (newest-cni-350416)   <features>
	I0913 20:17:53.695929   78618 main.go:141] libmachine: (newest-cni-350416)     <acpi/>
	I0913 20:17:53.695945   78618 main.go:141] libmachine: (newest-cni-350416)     <apic/>
	I0913 20:17:53.695952   78618 main.go:141] libmachine: (newest-cni-350416)     <pae/>
	I0913 20:17:53.695958   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.695967   78618 main.go:141] libmachine: (newest-cni-350416)   </features>
	I0913 20:17:53.695974   78618 main.go:141] libmachine: (newest-cni-350416)   <cpu mode='host-passthrough'>
	I0913 20:17:53.695981   78618 main.go:141] libmachine: (newest-cni-350416)   
	I0913 20:17:53.695990   78618 main.go:141] libmachine: (newest-cni-350416)   </cpu>
	I0913 20:17:53.695998   78618 main.go:141] libmachine: (newest-cni-350416)   <os>
	I0913 20:17:53.696012   78618 main.go:141] libmachine: (newest-cni-350416)     <type>hvm</type>
	I0913 20:17:53.696023   78618 main.go:141] libmachine: (newest-cni-350416)     <boot dev='cdrom'/>
	I0913 20:17:53.696037   78618 main.go:141] libmachine: (newest-cni-350416)     <boot dev='hd'/>
	I0913 20:17:53.696042   78618 main.go:141] libmachine: (newest-cni-350416)     <bootmenu enable='no'/>
	I0913 20:17:53.696049   78618 main.go:141] libmachine: (newest-cni-350416)   </os>
	I0913 20:17:53.696054   78618 main.go:141] libmachine: (newest-cni-350416)   <devices>
	I0913 20:17:53.696061   78618 main.go:141] libmachine: (newest-cni-350416)     <disk type='file' device='cdrom'>
	I0913 20:17:53.696084   78618 main.go:141] libmachine: (newest-cni-350416)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/boot2docker.iso'/>
	I0913 20:17:53.696103   78618 main.go:141] libmachine: (newest-cni-350416)       <target dev='hdc' bus='scsi'/>
	I0913 20:17:53.696116   78618 main.go:141] libmachine: (newest-cni-350416)       <readonly/>
	I0913 20:17:53.696129   78618 main.go:141] libmachine: (newest-cni-350416)     </disk>
	I0913 20:17:53.696137   78618 main.go:141] libmachine: (newest-cni-350416)     <disk type='file' device='disk'>
	I0913 20:17:53.696149   78618 main.go:141] libmachine: (newest-cni-350416)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 20:17:53.696164   78618 main.go:141] libmachine: (newest-cni-350416)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/newest-cni-350416.rawdisk'/>
	I0913 20:17:53.696179   78618 main.go:141] libmachine: (newest-cni-350416)       <target dev='hda' bus='virtio'/>
	I0913 20:17:53.696189   78618 main.go:141] libmachine: (newest-cni-350416)     </disk>
	I0913 20:17:53.696199   78618 main.go:141] libmachine: (newest-cni-350416)     <interface type='network'>
	I0913 20:17:53.696205   78618 main.go:141] libmachine: (newest-cni-350416)       <source network='mk-newest-cni-350416'/>
	I0913 20:17:53.696212   78618 main.go:141] libmachine: (newest-cni-350416)       <model type='virtio'/>
	I0913 20:17:53.696217   78618 main.go:141] libmachine: (newest-cni-350416)     </interface>
	I0913 20:17:53.696222   78618 main.go:141] libmachine: (newest-cni-350416)     <interface type='network'>
	I0913 20:17:53.696233   78618 main.go:141] libmachine: (newest-cni-350416)       <source network='default'/>
	I0913 20:17:53.696246   78618 main.go:141] libmachine: (newest-cni-350416)       <model type='virtio'/>
	I0913 20:17:53.696257   78618 main.go:141] libmachine: (newest-cni-350416)     </interface>
	I0913 20:17:53.696267   78618 main.go:141] libmachine: (newest-cni-350416)     <serial type='pty'>
	I0913 20:17:53.696275   78618 main.go:141] libmachine: (newest-cni-350416)       <target port='0'/>
	I0913 20:17:53.696283   78618 main.go:141] libmachine: (newest-cni-350416)     </serial>
	I0913 20:17:53.696291   78618 main.go:141] libmachine: (newest-cni-350416)     <console type='pty'>
	I0913 20:17:53.696301   78618 main.go:141] libmachine: (newest-cni-350416)       <target type='serial' port='0'/>
	I0913 20:17:53.696327   78618 main.go:141] libmachine: (newest-cni-350416)     </console>
	I0913 20:17:53.696344   78618 main.go:141] libmachine: (newest-cni-350416)     <rng model='virtio'>
	I0913 20:17:53.696351   78618 main.go:141] libmachine: (newest-cni-350416)       <backend model='random'>/dev/random</backend>
	I0913 20:17:53.696358   78618 main.go:141] libmachine: (newest-cni-350416)     </rng>
	I0913 20:17:53.696363   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.696368   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.696374   78618 main.go:141] libmachine: (newest-cni-350416)   </devices>
	I0913 20:17:53.696380   78618 main.go:141] libmachine: (newest-cni-350416) </domain>
	I0913 20:17:53.696387   78618 main.go:141] libmachine: (newest-cni-350416) 
	I0913 20:17:53.700720   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:6d:56:e9 in network default
	I0913 20:17:53.701294   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring networks are active...
	I0913 20:17:53.701319   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:53.701919   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring network default is active
	I0913 20:17:53.702293   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring network mk-newest-cni-350416 is active
	I0913 20:17:53.702778   78618 main.go:141] libmachine: (newest-cni-350416) Getting domain xml...
	I0913 20:17:53.703421   78618 main.go:141] libmachine: (newest-cni-350416) Creating domain...
	I0913 20:17:54.970084   78618 main.go:141] libmachine: (newest-cni-350416) Waiting to get IP...
	I0913 20:17:54.970893   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:54.971372   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:54.971406   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:54.971360   78657 retry.go:31] will retry after 284.279719ms: waiting for machine to come up
	I0913 20:17:55.257056   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.257642   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.257721   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.257618   78657 retry.go:31] will retry after 364.649975ms: waiting for machine to come up
	I0913 20:17:55.624307   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.624756   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.624784   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.624741   78657 retry.go:31] will retry after 351.238866ms: waiting for machine to come up
	I0913 20:17:55.977346   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.977888   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.977915   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.977853   78657 retry.go:31] will retry after 522.890335ms: waiting for machine to come up
	I0913 20:17:56.502105   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:56.502648   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:56.502674   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:56.502586   78657 retry.go:31] will retry after 513.308242ms: waiting for machine to come up
	I0913 20:17:57.017258   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:57.017728   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:57.017790   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:57.017705   78657 retry.go:31] will retry after 619.411725ms: waiting for machine to come up
	I0913 20:17:57.638526   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:57.638898   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:57.638950   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:57.638877   78657 retry.go:31] will retry after 1.010741913s: waiting for machine to come up
	I0913 20:17:58.650971   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:58.651466   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:58.651491   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:58.651419   78657 retry.go:31] will retry after 915.874231ms: waiting for machine to come up
	I0913 20:17:59.568434   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:59.568867   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:59.568908   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:59.568813   78657 retry.go:31] will retry after 1.198526884s: waiting for machine to come up
	I0913 20:18:00.769373   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:00.769749   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:00.769778   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:00.769701   78657 retry.go:31] will retry after 2.086733775s: waiting for machine to come up
	I0913 20:18:02.858968   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:02.859429   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:02.859453   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:02.859396   78657 retry.go:31] will retry after 2.555556586s: waiting for machine to come up
	I0913 20:18:05.416191   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:05.416660   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:05.416689   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:05.416629   78657 retry.go:31] will retry after 3.585122192s: waiting for machine to come up
	I0913 20:18:09.003278   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:09.003679   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:09.003697   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:09.003659   78657 retry.go:31] will retry after 4.250465496s: waiting for machine to come up
	I0913 20:18:13.256148   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:13.256661   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:13.256681   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:13.256617   78657 retry.go:31] will retry after 4.555625296s: waiting for machine to come up
	I0913 20:18:17.815183   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.815655   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has current primary IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.815674   78618 main.go:141] libmachine: (newest-cni-350416) Found IP for machine: 192.168.72.56
	I0913 20:18:17.815686   78618 main.go:141] libmachine: (newest-cni-350416) Reserving static IP address...
	I0913 20:18:17.816054   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find host DHCP lease matching {name: "newest-cni-350416", mac: "52:54:00:ca:5a:f4", ip: "192.168.72.56"} in network mk-newest-cni-350416
	I0913 20:18:17.892348   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Getting to WaitForSSH function...
	I0913 20:18:17.892375   78618 main.go:141] libmachine: (newest-cni-350416) Reserved static IP address: 192.168.72.56
	I0913 20:18:17.892387   78618 main.go:141] libmachine: (newest-cni-350416) Waiting for SSH to be available...
	I0913 20:18:17.895469   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.895847   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:17.895885   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.896035   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH client type: external
	I0913 20:18:17.896068   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa (-rw-------)
	I0913 20:18:17.896096   78618 main.go:141] libmachine: (newest-cni-350416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 20:18:17.896116   78618 main.go:141] libmachine: (newest-cni-350416) DBG | About to run SSH command:
	I0913 20:18:17.896128   78618 main.go:141] libmachine: (newest-cni-350416) DBG | exit 0
	I0913 20:18:18.022569   78618 main.go:141] libmachine: (newest-cni-350416) DBG | SSH cmd err, output: <nil>: 
	I0913 20:18:18.022808   78618 main.go:141] libmachine: (newest-cni-350416) KVM machine creation complete!
	I0913 20:18:18.023083   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:18:18.023635   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:18.023792   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:18.023936   78618 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 20:18:18.023950   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:18.025193   78618 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 20:18:18.025210   78618 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 20:18:18.025215   78618 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 20:18:18.025220   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.027344   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.027770   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.027797   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.027955   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.028114   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.028276   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.028371   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.028512   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.028721   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.028736   78618 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 20:18:18.145557   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 20:18:18.145581   78618 main.go:141] libmachine: Detecting the provisioner...
	I0913 20:18:18.145589   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.148375   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.148748   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.148768   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.148908   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.149093   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.149252   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.149392   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.149567   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.149725   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.149735   78618 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 20:18:18.259256   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 20:18:18.259356   78618 main.go:141] libmachine: found compatible host: buildroot
	I0913 20:18:18.259371   78618 main.go:141] libmachine: Provisioning with buildroot...
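	Provisioner detection here is simply `cat /etc/os-release` followed by matching on the reported distribution. A small sketch of that step, assuming the os-release text has already been captured over SSH; the parsing helper below is illustrative, not minikube's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of an os-release file into a map,
// stripping optional surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(val, `"`)
	}
	return out
}

func main() {
	// The output captured in the log above.
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	}
}
```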
	I0913 20:18:18.259380   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.259630   78618 buildroot.go:166] provisioning hostname "newest-cni-350416"
	I0913 20:18:18.259658   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.259841   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.262454   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.262896   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.262917   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.263098   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.263274   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.263417   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.263547   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.263732   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.263934   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.263947   78618 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-350416 && echo "newest-cni-350416" | sudo tee /etc/hostname
	I0913 20:18:18.391203   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-350416
	
	I0913 20:18:18.391231   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.394245   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.394654   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.394685   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.394864   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.395046   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.395231   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.395362   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.395511   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.395725   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.395756   78618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-350416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-350416/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-350416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 20:18:18.512849   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 20:18:18.512878   78618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 20:18:18.512897   78618 buildroot.go:174] setting up certificates
	I0913 20:18:18.512905   78618 provision.go:84] configureAuth start
	I0913 20:18:18.512914   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.513194   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:18.516150   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.516474   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.516491   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.516733   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.519202   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.519508   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.519548   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.519649   78618 provision.go:143] copyHostCerts
	I0913 20:18:18.519704   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 20:18:18.519717   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 20:18:18.519801   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 20:18:18.519905   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 20:18:18.519916   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 20:18:18.519961   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 20:18:18.520070   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 20:18:18.520082   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 20:18:18.520121   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 20:18:18.520200   78618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.newest-cni-350416 san=[127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416]
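	The server certificate above is generated on the host and signed by the profile CA, with the SAN list covering loopback, the guest IP, and its hostnames. A self-contained sketch of issuing such a cert with crypto/x509; it generates a throwaway CA instead of loading minikube's ca.pem/ca-key.pem, key sizes and validity are illustrative, and errors are elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log:
	// [127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-350416"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.56")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-350416"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes)\n", len(pemCert))
}
```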
	I0913 20:18:18.590824   78618 provision.go:177] copyRemoteCerts
	I0913 20:18:18.590894   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 20:18:18.590925   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.594149   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.594575   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.594604   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.594845   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.595032   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.595209   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.595363   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:18.684929   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 20:18:18.710605   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 20:18:18.736528   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 20:18:18.761086   78618 provision.go:87] duration metric: took 248.16824ms to configureAuth
	I0913 20:18:18.761127   78618 buildroot.go:189] setting minikube options for container-runtime
	I0913 20:18:18.761333   78618 config.go:182] Loaded profile config "newest-cni-350416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:18:18.761462   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.764233   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.764591   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.764632   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.764783   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.764956   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.765056   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.765205   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.765347   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.765502   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.765533   78618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 20:18:19.001904   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 20:18:19.001937   78618 main.go:141] libmachine: Checking connection to Docker...
	I0913 20:18:19.001949   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetURL
	I0913 20:18:19.003220   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using libvirt version 6000000
	I0913 20:18:19.005080   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.005546   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.005574   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.005778   78618 main.go:141] libmachine: Docker is up and running!
	I0913 20:18:19.005793   78618 main.go:141] libmachine: Reticulating splines...
	I0913 20:18:19.005801   78618 client.go:171] duration metric: took 25.840331943s to LocalClient.Create
	I0913 20:18:19.005841   78618 start.go:167] duration metric: took 25.840394382s to libmachine.API.Create "newest-cni-350416"
	I0913 20:18:19.005854   78618 start.go:293] postStartSetup for "newest-cni-350416" (driver="kvm2")
	I0913 20:18:19.005866   78618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 20:18:19.005883   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.006157   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 20:18:19.006188   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.008175   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.008553   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.008578   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.008668   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.008932   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.009122   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.009411   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.092751   78618 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 20:18:19.097025   78618 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 20:18:19.097052   78618 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 20:18:19.097117   78618 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 20:18:19.097203   78618 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 20:18:19.097285   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 20:18:19.106334   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 20:18:19.131631   78618 start.go:296] duration metric: took 125.762424ms for postStartSetup
	I0913 20:18:19.131689   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:18:19.132358   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:19.135146   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.135579   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.135605   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.135853   78618 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:18:19.136034   78618 start.go:128] duration metric: took 25.989944651s to createHost
	I0913 20:18:19.136059   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.138242   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.138636   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.138661   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.138781   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.138945   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.139114   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.139239   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.139400   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:19.139610   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:19.139624   78618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 20:18:19.255172   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726258699.232425808
	
	I0913 20:18:19.255198   78618 fix.go:216] guest clock: 1726258699.232425808
	I0913 20:18:19.255208   78618 fix.go:229] Guest: 2024-09-13 20:18:19.232425808 +0000 UTC Remote: 2024-09-13 20:18:19.136046627 +0000 UTC m=+26.102279958 (delta=96.379181ms)
	I0913 20:18:19.255235   78618 fix.go:200] guest clock delta is within tolerance: 96.379181ms
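	The clock check runs `date +%s.%N` in the guest and compares the result with the host clock, accepting small skew before releasing the machines lock. A sketch of that comparison, assuming the guest output has already been captured; the 2-second tolerance below is an illustrative value, not the one minikube uses:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		// %N always prints nine digits, so the fraction is nanoseconds.
		ns, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseGuestClock("1726258699.232425808\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```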
	I0913 20:18:19.255244   78618 start.go:83] releasing machines lock for "newest-cni-350416", held for 26.109236556s
	I0913 20:18:19.255272   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.255549   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:19.258112   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.258603   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.258642   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.258795   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259238   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259508   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259612   78618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 20:18:19.259651   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.259710   78618 ssh_runner.go:195] Run: cat /version.json
	I0913 20:18:19.259735   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.262387   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262616   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262760   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.262789   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262928   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.263022   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.263052   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.263139   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.263213   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.263291   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.263400   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.263444   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.263546   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.263690   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.367326   78618 ssh_runner.go:195] Run: systemctl --version
	I0913 20:18:19.373133   78618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 20:18:19.533204   78618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 20:18:19.540078   78618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 20:18:19.540145   78618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 20:18:19.557385   78618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 20:18:19.557411   78618 start.go:495] detecting cgroup driver to use...
	I0913 20:18:19.557481   78618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 20:18:19.575471   78618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 20:18:19.589534   78618 docker.go:217] disabling cri-docker service (if available) ...
	I0913 20:18:19.589601   78618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 20:18:19.602905   78618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 20:18:19.616392   78618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 20:18:19.735766   78618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 20:18:19.880340   78618 docker.go:233] disabling docker service ...
	I0913 20:18:19.880416   78618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 20:18:19.895106   78618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 20:18:19.908658   78618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 20:18:20.058672   78618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 20:18:20.180401   78618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 20:18:20.194786   78618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 20:18:20.213713   78618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 20:18:20.213770   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.223957   78618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 20:18:20.224012   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.234176   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.244507   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.254714   78618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 20:18:20.265967   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.276060   78618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.293622   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.303609   78618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 20:18:20.313513   78618 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 20:18:20.313562   78618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 20:18:20.327748   78618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
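	When the bridge-netfilter sysctl is missing (the status 255 above), the remedy is to load br_netfilter and then enable IPv4 forwarding. A sketch of that verify-then-enable sequence run locally as root; the log performs the same steps over SSH via ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the sysctl file is absent, the br_netfilter module is not loaded yet.
	if _, err := os.Stat(bridgeSysctl); err != nil {
		fmt.Println("bridge netfilter not available, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}

	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed (needs root):", err)
		return
	}
	fmt.Println("IPv4 forwarding enabled")
}
```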
	I0913 20:18:20.339098   78618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:18:20.455219   78618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 20:18:20.560379   78618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 20:18:20.560446   78618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 20:18:20.565345   78618 start.go:563] Will wait 60s for crictl version
	I0913 20:18:20.565408   78618 ssh_runner.go:195] Run: which crictl
	I0913 20:18:20.569510   78618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 20:18:20.609857   78618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 20:18:20.609922   78618 ssh_runner.go:195] Run: crio --version
	I0913 20:18:20.638401   78618 ssh_runner.go:195] Run: crio --version
	I0913 20:18:20.671339   78618 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 20:18:20.672486   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:20.675093   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:20.675401   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:20.675431   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:20.675616   78618 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 20:18:20.680277   78618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 20:18:20.694660   78618 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0913 20:18:20.695840   78618 kubeadm.go:883] updating cluster {Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 20:18:20.695967   78618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 20:18:20.696036   78618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 20:18:20.731785   78618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 20:18:20.731866   78618 ssh_runner.go:195] Run: which lz4
	I0913 20:18:20.736208   78618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 20:18:20.740562   78618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 20:18:20.740598   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 20:18:22.144859   78618 crio.go:462] duration metric: took 1.408702239s to copy over tarball
	I0913 20:18:22.144936   78618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 20:18:24.177149   78618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032188373s)
	I0913 20:18:24.177174   78618 crio.go:469] duration metric: took 2.032289486s to extract the tarball
	I0913 20:18:24.177182   78618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 20:18:24.215670   78618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 20:18:24.258983   78618 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 20:18:24.259003   78618 cache_images.go:84] Images are preloaded, skipping loading
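	The preload flow above is: probe the image store, and if it is empty, copy the cached tarball over and unpack it under /var with lz4, then remove the archive. A local sketch of the extract step under the assumption that the tarball has already been transferred and that tar and lz4 are installed on the machine:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		// In the log, this is the point where minikube scp's the cached
		// preloaded-images-k8s-*.tar.lz4 onto the machine first.
		fmt.Println("preload tarball missing, would transfer it:", err)
		return
	}

	// Same flags as the log: keep security xattrs and decompress with lz4.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Free the space once the layers are unpacked, as the log does.
	_ = os.Remove(tarball)
	fmt.Println("preload extracted")
}
```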
	I0913 20:18:24.259010   78618 kubeadm.go:934] updating node { 192.168.72.56 8443 v1.31.1 crio true true} ...
	I0913 20:18:24.259101   78618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-350416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 20:18:24.259162   78618 ssh_runner.go:195] Run: crio config
	I0913 20:18:24.309703   78618 cni.go:84] Creating CNI manager for ""
	I0913 20:18:24.309727   78618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:18:24.309737   78618 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0913 20:18:24.309757   78618 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-350416 NodeName:newest-cni-350416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 20:18:24.309895   78618 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-350416"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 20:18:24.309954   78618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 20:18:24.320322   78618 binaries.go:44] Found k8s binaries, skipping transfer
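	The kubeadm config rendered a few lines above comes from the options struct logged at kubeadm.go:181 (advertise address, pod CIDR, feature gates, and so on). A tiny sketch of that struct-to-YAML rendering with text/template; the struct and template below are trimmed illustrations, not minikube's actual kubeadm types or template:

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a trimmed, hypothetical stand-in for the kubeadm
// options logged above.
type kubeadmParams struct {
	APIServerPort     int
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		APIServerPort:     8443,
		PodSubnet:         "10.42.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	}
	tmpl := template.Must(template.New("ClusterConfiguration").Parse(clusterConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```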
	I0913 20:18:24.320415   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 20:18:24.330176   78618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0913 20:18:24.352594   78618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 20:18:24.372861   78618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0913 20:18:24.389601   78618 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0913 20:18:24.393348   78618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 20:18:24.405088   78618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:18:24.539341   78618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:18:24.561683   78618 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416 for IP: 192.168.72.56
	I0913 20:18:24.561704   78618 certs.go:194] generating shared ca certs ...
	I0913 20:18:24.561723   78618 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.561902   78618 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 20:18:24.561964   78618 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 20:18:24.561980   78618 certs.go:256] generating profile certs ...
	I0913 20:18:24.562046   78618 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key
	I0913 20:18:24.562078   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt with IP's: []
	I0913 20:18:24.681770   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt ...
	I0913 20:18:24.681801   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt: {Name:mk0a18100c95f2446b4dae27c8d4ce3bd1331da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.681996   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key ...
	I0913 20:18:24.682013   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key: {Name:mk81a2f20e5c3515cf4258741dd2a03651473768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.682139   78618 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee
	I0913 20:18:24.682164   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.56]
	I0913 20:18:24.875783   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee ...
	I0913 20:18:24.875815   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee: {Name:mk9447078ee811271cf60ea7f788f6363d1810f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.876046   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee ...
	I0913 20:18:24.876063   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee: {Name:mkadad9a8167aaefd31ad5e191beee2d93039c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.876210   78618 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt
	I0913 20:18:24.876312   78618 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key
	I0913 20:18:24.876398   78618 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key
	I0913 20:18:24.876425   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt with IP's: []
	I0913 20:18:24.973620   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt ...
	I0913 20:18:24.973650   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt: {Name:mkf9ad5559c5cf3dd38ce74b5e325fd4b60bbcb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.988202   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key ...
	I0913 20:18:24.988268   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key: {Name:mkafc4915b8b2b8b957b9031db517e8d2d2a7699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.988549   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 20:18:24.988619   78618 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 20:18:24.988638   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 20:18:24.988667   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 20:18:24.988697   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 20:18:24.988728   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 20:18:24.988783   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 20:18:24.989497   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 20:18:25.017766   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 20:18:25.044635   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 20:18:25.073179   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 20:18:25.098366   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 20:18:25.124948   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 20:18:25.151129   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 20:18:25.176023   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 20:18:25.202723   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 20:18:25.228509   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 20:18:25.254758   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 20:18:25.280044   78618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 20:18:25.297000   78618 ssh_runner.go:195] Run: openssl version
	I0913 20:18:25.302667   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 20:18:25.313134   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.317979   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.318035   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.325042   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 20:18:25.342834   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 20:18:25.373652   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.381253   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.381333   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.391426   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 20:18:25.411098   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 20:18:25.423420   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.428094   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.428150   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.436588   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 20:18:25.450503   78618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 20:18:25.454998   78618 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 20:18:25.455060   78618 kubeadm.go:392] StartCluster: {Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:18:25.455154   78618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 20:18:25.455213   78618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 20:18:25.504001   78618 cri.go:89] found id: ""
	I0913 20:18:25.504074   78618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 20:18:25.515134   78618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:18:25.526660   78618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:18:25.538810   78618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:18:25.538836   78618 kubeadm.go:157] found existing configuration files:
	
	I0913 20:18:25.538887   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:18:25.552145   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:18:25.552198   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:18:25.566590   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:18:25.579683   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:18:25.579749   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:18:25.592991   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:18:25.603885   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:18:25.603927   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:18:25.614932   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:18:25.626230   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:18:25.626293   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:18:25.642955   78618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:18:25.771410   78618 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:18:25.771504   78618 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:18:25.883049   78618 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:18:25.883224   78618 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:18:25.883350   78618 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:18:25.897018   78618 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
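(Editor's note, not part of the test log.) Earlier in this log the runner installs the test CA material by hashing each PEM with `openssl x509 -hash -noout` and symlinking it as /etc/ssl/certs/<hash>.0 (the 51391683.0, 3ec20f2e.0 and b5213941.0 links above) — the standard OpenSSL subject-hash lookup scheme. The following is a minimal standalone Go sketch of that hash-and-symlink step for readers unfamiliar with it; it assumes openssl is on PATH and write access to /etc/ssl/certs, and it is an illustration only, not minikube's actual ssh_runner-based implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the log's "openssl x509 -hash" + "ln -fs" sequence:
// it asks openssl for the certificate's subject hash and then points
// /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients trust it.
func installCACert(pemPath string) error {
	// openssl prints the subject hash (e.g. "b5213941") on a single line.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Hypothetical example path matching the cert names seen in this log.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}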
	
	
	==> CRI-O <==
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.367345066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c78d31d-f190-4021-a970-f6fbd8f37775 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.369574786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a59228e-2160-485b-b64a-603417b5f44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.370273861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258707370142789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a59228e-2160-485b-b64a-603417b5f44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.371216046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99a11bac-01f8-4029-8ac9-e88cccbf2cd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.371285752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99a11bac-01f8-4029-8ac9-e88cccbf2cd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.371497365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99a11bac-01f8-4029-8ac9-e88cccbf2cd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.410378517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3333acf2-4acb-4cde-9e5e-22638cad8390 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.410460487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3333acf2-4acb-4cde-9e5e-22638cad8390 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.411436539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89dfc64b-6cb0-4655-bf5b-de49ca07b7d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.411776326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258707411755031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89dfc64b-6cb0-4655-bf5b-de49ca07b7d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.412325628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b59fb0c5-8c2e-4b4f-9887-4b895faf1d4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.412373185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b59fb0c5-8c2e-4b4f-9887-4b895faf1d4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.412611050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b59fb0c5-8c2e-4b4f-9887-4b895faf1d4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.445345126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5689dbb-ba1c-4f71-92c5-a437056098c7 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.445424745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5689dbb-ba1c-4f71-92c5-a437056098c7 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.446521584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e95e0513-b492-4284-87b0-fc0830c13a04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.447167585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258707447038972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e95e0513-b492-4284-87b0-fc0830c13a04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.447802920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=156dccff-543c-4eed-a52d-5db04ccbdb53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.447935271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=156dccff-543c-4eed-a52d-5db04ccbdb53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.448140337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=156dccff-543c-4eed-a52d-5db04ccbdb53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.525445220Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=417be956-802e-4bb9-b329-4e21c1f2f735 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.525731139Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&PodSandboxMetadata{Name:busybox,Uid:bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257481811817760,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T19:57:53.920287413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fjzxv,Uid:984f1946-61b1-4881-ae99-495855aaf948,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17262574817210923
01,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T19:57:53.920274385Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af593b9fbb3849bd6d8b3e93ea01df2ba0f17d5d4bf6ffed9bcf7c20707d07fe,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-bq7jp,Uid:9920ad88-3d00-458f-94d4-3dcfd0cd9a01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257480007277723,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-bq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9920ad88-3d00-458f-94d4-3dcfd0cd9a01,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T19:57:53.9
20289766Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257474243768754,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T19:57:53.920286102Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&PodSandboxMetadata{Name:kube-proxy-b24zg,Uid:67fffd9e-ddf7-4abb-bfce-1528060d6b43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257474231034313,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bfce-1528060d6b43,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-09-13T19:57:53.920288539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-239327,Uid:d27d74d3adbf9e1a8fb5f27e34765015,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257469436961608,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a8fb5f27e34765015,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d27d74d3adbf9e1a8fb5f27e34765015,kubernetes.io/config.seen: 2024-09-13T19:57:48.908340856Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-239327,Uid:86f02b0f8fb994379d37353d7e3
7c6d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257469435515739,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.13:2379,kubernetes.io/config.hash: 86f02b0f8fb994379d37353d7e37c6d9,kubernetes.io/config.seen: 2024-09-13T19:57:49.000450377Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-239327,Uid:6be3a374c3b71fab710754c2fbd15de6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257469432321000,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6be3a374c3b71fab710754c2fbd15de6,kubernetes.io/config.seen: 2024-09-13T19:57:48.908345993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-239327,Uid:62f2766c1d6aefa961c8bc9e97da2ac4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726257469415901157,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.13:8443,kubernetes.io/config.hash: 62f2766c1d6aefa961c8bc9e97da2ac4,kube
rnetes.io/config.seen: 2024-09-13T19:57:48.908347145Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=417be956-802e-4bb9-b329-4e21c1f2f735 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.526480655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a65817b-c6be-4a28-b492-c8eccac7f364 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.526540289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a65817b-c6be-4a28-b492-c8eccac7f364 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:27 no-preload-239327 crio[706]: time="2024-09-13 20:18:27.526744554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257505260601200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7072134a2004c7ca1f46e019b8df505cf51ba9f28fd395093aa6bc3f953085e2,PodSandboxId:aad57f23f7b9d4707a8bfffdf0e25d4af243348041ce9e8d754b9cc72ffdedda,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257485507100102,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbf45dbd-00fc-4d1f-952b-e3741f1e2e96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73,PodSandboxId:2a5a9a6660ec0a956732d808a6966894dc14c94018e55071bb0bd441909aec88,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257482009005191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f1946-61b1-4881-ae99-495855aaf948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991,PodSandboxId:e122a3e335a48d85769ac9fa798ae16aa9d7379b4181a9497fa9fef5b04e0432,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257474620787945,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b24zg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fffd9e-ddf7-4abb-bf
ce-1528060d6b43,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe,PodSandboxId:7385496e03b484759e09d27bb4f8bf5d45b469ce208eeb6f389e8363673d1376,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257474692014207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb55fe9-4adb-4d3e-9f26-34ee4b3f01
a2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d,PodSandboxId:c47a79173c9568e56ea539306a842412f7e30ef571f434f95d7f524ecbf75bfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257469623037132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27d74d3adbf9e1a
8fb5f27e34765015,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3,PodSandboxId:414bfb6888204e21d8c51c3b0ae29137e36a30c27fc16f3c5885352b1a0f2fff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257469706994027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f02b0f8fb994379d37353d7e37c6d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf,PodSandboxId:bf03222df8fb595c984c7172cd535c8969884b0d5f80cd5150cd0bf8da544763,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257469685186220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6be3a374c3b71fab710754c2fbd15de6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3,PodSandboxId:db3f50d1e105c112dd17655faeda9d724a50e582413bae6643c5639081a5325e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257469583573446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-239327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f2766c1d6aefa961c8bc9e97da2ac4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a65817b-c6be-4a28-b492-c8eccac7f364 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc01d7b17bbc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   7385496e03b48       storage-provisioner
	7072134a2004c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   aad57f23f7b9d       busybox
	e70559352db6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   2a5a9a6660ec0       coredns-7c65d6cfc9-fjzxv
	4a9c61bb67732       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   7385496e03b48       storage-provisioner
	adbec8ff0ed7a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   e122a3e335a48       kube-proxy-b24zg
	a3490cc2f99b2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   414bfb6888204       etcd-no-preload-239327
	4c2bf4fed4e33       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   bf03222df8fb5       kube-scheduler-no-preload-239327
	e6169bebe5711       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   c47a79173c956       kube-controller-manager-no-preload-239327
	7b1108fd58417       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   db3f50d1e105c       kube-apiserver-no-preload-239327
	
	
	==> coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42306 - 61102 "HINFO IN 6614262023756072451.4504198368740859932. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015714591s
	
	
	==> describe nodes <==
	Name:               no-preload-239327
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-239327
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=no-preload-239327
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_49_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:49:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-239327
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:13:41 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:13:41 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:13:41 +0000   Fri, 13 Sep 2024 19:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:13:41 +0000   Fri, 13 Sep 2024 19:58:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.13
	  Hostname:    no-preload-239327
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8853583287464402a98383f1ee71c8a5
	  System UUID:                88535832-8746-4402-a983-83f1ee71c8a5
	  Boot ID:                    299616d8-5112-4b28-a916-bc79aca3145c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-fjzxv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-239327                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-239327             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-239327    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-b24zg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-239327             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-bq7jp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-239327 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-239327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-239327 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-239327 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-239327 event: Registered Node no-preload-239327 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-239327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-239327 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-239327 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-239327 event: Registered Node no-preload-239327 in Controller
	
	
	==> dmesg <==
	[Sep13 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050867] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040055] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.450333] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.553765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.972016] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.061361] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063867] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.199840] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.117715] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.286650] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.330883] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.057567] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.769303] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +4.446395] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.675493] systemd-fstab-generator[1993]: Ignoring "noauto" option for root device
	[  +3.180062] kauditd_printk_skb: 61 callbacks suppressed
	[Sep13 19:58] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] <==
	{"level":"info","ts":"2024-09-13T19:58:39.351707Z","caller":"traceutil/trace.go:171","msg":"trace[788229311] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:636; }","duration":"365.587976ms","start":"2024-09-13T19:58:38.986107Z","end":"2024-09-13T19:58:39.351695Z","steps":["trace[788229311] 'range keys from in-memory index tree'  (duration: 365.48118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.352463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.732487ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2041698453018029481 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" mod_revision:618 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" value_size:668 lease:2041698453018028692 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-13T19:58:39.352574Z","caller":"traceutil/trace.go:171","msg":"trace[1188443948] linearizableReadLoop","detail":"{readStateIndex:682; appliedIndex:681; }","duration":"486.471612ms","start":"2024-09-13T19:58:38.866095Z","end":"2024-09-13T19:58:39.352567Z","steps":["trace[1188443948] 'read index received'  (duration: 120.524016ms)","trace[1188443948] 'applied index is now lower than readState.Index'  (duration: 365.946524ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T19:58:39.352655Z","caller":"traceutil/trace.go:171","msg":"trace[1463682818] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"487.94197ms","start":"2024-09-13T19:58:38.864706Z","end":"2024-09-13T19:58:39.352648Z","steps":["trace[1463682818] 'process raft request'  (duration: 121.973667ms)","trace[1463682818] 'compare'  (duration: 364.520022ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T19:58:39.352865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.864687Z","time spent":"488.057523ms","remote":"127.0.0.1:48108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" mod_revision:618 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" value_size:668 lease:2041698453018028692 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-bq7jp.17f4e608407ccf27\" > >"}
	{"level":"warn","ts":"2024-09-13T19:58:39.353100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.286032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T19:58:39.353158Z","caller":"traceutil/trace.go:171","msg":"trace[475232123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:637; }","duration":"428.348122ms","start":"2024-09-13T19:58:38.924801Z","end":"2024-09-13T19:58:39.353149Z","steps":["trace[475232123] 'agreement among raft nodes before linearized reading'  (duration: 428.242447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.353186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.924772Z","time spent":"428.407395ms","remote":"127.0.0.1:48042","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-13T19:58:39.353118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"487.01352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-239327\" ","response":"range_response_count:1 size:4663"}
	{"level":"info","ts":"2024-09-13T19:58:39.353371Z","caller":"traceutil/trace.go:171","msg":"trace[807714909] range","detail":"{range_begin:/registry/minions/no-preload-239327; range_end:; response_count:1; response_revision:637; }","duration":"487.26507ms","start":"2024-09-13T19:58:38.866092Z","end":"2024-09-13T19:58:39.353357Z","steps":["trace[807714909] 'agreement among raft nodes before linearized reading'  (duration: 486.932274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T19:58:39.353412Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T19:58:38.866068Z","time spent":"487.335403ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4686,"request content":"key:\"/registry/minions/no-preload-239327\" "}
	{"level":"info","ts":"2024-09-13T20:07:51.913360Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":854}
	{"level":"info","ts":"2024-09-13T20:07:51.932008Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":854,"took":"17.84173ms","hash":4071954628,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-13T20:07:51.932116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4071954628,"revision":854,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T20:12:51.929423Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1097}
	{"level":"info","ts":"2024-09-13T20:12:51.934309Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1097,"took":"4.10201ms","hash":2761343508,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-13T20:12:51.934396Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2761343508,"revision":1097,"compact-revision":854}
	{"level":"info","ts":"2024-09-13T20:17:51.937243Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1339}
	{"level":"info","ts":"2024-09-13T20:17:51.942506Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1339,"took":"4.389314ms","hash":2439219832,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-13T20:17:51.942624Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2439219832,"revision":1339,"compact-revision":1097}
	{"level":"info","ts":"2024-09-13T20:18:24.990442Z","caller":"traceutil/trace.go:171","msg":"trace[749573176] transaction","detail":"{read_only:false; response_revision:1610; number_of_response:1; }","duration":"167.520445ms","start":"2024-09-13T20:18:24.822884Z","end":"2024-09-13T20:18:24.990405Z","steps":["trace[749573176] 'process raft request'  (duration: 166.986692ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T20:18:25.232589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.687226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2041698453018037281 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1c5591ecf69acc20>","response":"size:40"}
	{"level":"warn","ts":"2024-09-13T20:18:25.487883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.603911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-13T20:18:25.487972Z","caller":"traceutil/trace.go:171","msg":"trace[451327988] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1611; }","duration":"126.75609ms","start":"2024-09-13T20:18:25.361177Z","end":"2024-09-13T20:18:25.487933Z","steps":["trace[451327988] 'count revisions from in-memory index tree'  (duration: 126.511495ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T20:18:27.127096Z","caller":"traceutil/trace.go:171","msg":"trace[1865895183] transaction","detail":"{read_only:false; response_revision:1612; number_of_response:1; }","duration":"121.113722ms","start":"2024-09-13T20:18:27.005961Z","end":"2024-09-13T20:18:27.127075Z","steps":["trace[1865895183] 'process raft request'  (duration: 120.957212ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:18:28 up 21 min,  0 users,  load average: 0.11, 0.11, 0.12
	Linux no-preload-239327 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] <==
	I0913 20:13:54.506168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:13:54.506223       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:15:54.507095       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:15:54.507357       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 20:15:54.507426       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:15:54.507495       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:15:54.508673       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:15:54.508761       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:17:53.508750       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:53.509002       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:17:54.511124       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:54.511247       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:17:54.511135       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:54.511272       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:17:54.512552       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:17:54.512675       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] <==
	E0913 20:13:27.158549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:13:27.650221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:13:41.151154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-239327"
	E0913 20:13:57.165585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:13:57.658122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:14:09.012434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="253.754µs"
	I0913 20:14:23.017787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="133.685µs"
	E0913 20:14:27.172652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:27.666521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:14:57.179984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:57.674606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:15:27.186352       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:27.685463       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:15:57.192495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:57.697794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:27.199992       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:27.707175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:57.208056       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:57.715099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:27.215067       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:27.724544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:57.221971       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:57.733640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:18:27.230411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:18:27.744676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:57:54.989255       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:57:54.998774       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.13"]
	E0913 19:57:54.999177       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:57:55.059172       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:57:55.059241       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:57:55.059285       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:57:55.065727       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:57:55.066754       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:57:55.066930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:57:55.072475       1 config.go:199] "Starting service config controller"
	I0913 19:57:55.072498       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:57:55.072522       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:57:55.072528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:57:55.096787       1 config.go:328] "Starting node config controller"
	I0913 19:57:55.096988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:57:55.173653       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:57:55.173705       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:57:55.202080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] <==
	I0913 19:57:50.801470       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:57:53.398674       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:57:53.398773       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:57:53.398788       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:57:53.398796       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:57:53.468034       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:57:53.468095       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:57:53.478910       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:57:53.479011       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:57:53.481652       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:57:53.482128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:57:53.585294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:17:17 no-preload-239327 kubelet[1365]: E0913 20:17:17.994760    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:17:19 no-preload-239327 kubelet[1365]: E0913 20:17:19.247937    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258639247584312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:19 no-preload-239327 kubelet[1365]: E0913 20:17:19.248024    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258639247584312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:29 no-preload-239327 kubelet[1365]: E0913 20:17:29.249606    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258649248951070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:29 no-preload-239327 kubelet[1365]: E0913 20:17:29.249661    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258649248951070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:29 no-preload-239327 kubelet[1365]: E0913 20:17:29.993954    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:17:39 no-preload-239327 kubelet[1365]: E0913 20:17:39.252389    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258659251312579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:39 no-preload-239327 kubelet[1365]: E0913 20:17:39.252433    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258659251312579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:44 no-preload-239327 kubelet[1365]: E0913 20:17:44.994226    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]: E0913 20:17:49.016317    1365 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]: E0913 20:17:49.257285    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258669255805760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:49 no-preload-239327 kubelet[1365]: E0913 20:17:49.257327    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258669255805760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:55 no-preload-239327 kubelet[1365]: E0913 20:17:55.993709    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:17:59 no-preload-239327 kubelet[1365]: E0913 20:17:59.258952    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258679258546712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:59 no-preload-239327 kubelet[1365]: E0913 20:17:59.259025    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258679258546712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:06 no-preload-239327 kubelet[1365]: E0913 20:18:06.996991    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	Sep 13 20:18:09 no-preload-239327 kubelet[1365]: E0913 20:18:09.260314    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258689260008505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:09 no-preload-239327 kubelet[1365]: E0913 20:18:09.260734    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258689260008505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:19 no-preload-239327 kubelet[1365]: E0913 20:18:19.262168    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258699261811988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:19 no-preload-239327 kubelet[1365]: E0913 20:18:19.262231    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258699261811988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:20 no-preload-239327 kubelet[1365]: E0913 20:18:20.996199    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bq7jp" podUID="9920ad88-3d00-458f-94d4-3dcfd0cd9a01"
	
	
	==> storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] <==
	I0913 19:57:54.844408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0913 19:58:24.850436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] <==
	I0913 19:58:25.373098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:58:25.383445       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:58:25.383565       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:58:42.787282       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:58:42.787524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2!
	I0913 19:58:42.795656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2fae039-fec2-4875-a26f-88621d1b9405", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2 became leader
	I0913 19:58:42.888766       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-239327_be5d3fbe-1a7b-4ab6-9f3a-8c29448760b2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-239327 -n no-preload-239327
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-239327 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bq7jp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp: exit status 1 (68.823294ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bq7jp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-239327 describe pod metrics-server-6867b74b74-bq7jp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.22s)
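For replaying this post-mortem by hand, the checks above reduce to two kubectl invocations against the same profile; a minimal sketch, assuming the no-preload-239327 kubeconfig context from this run were still reachable (it is not once the CI VM is torn down):

	# List pods not in the Running phase, as helpers_test.go:261 does.
	kubectl --context no-preload-239327 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describe the flagged pod. The harness runs describe without a namespace, so the
	# NotFound above is likely a default-namespace lookup; the pod itself sits in
	# kube-system (see the "describe nodes" output), hence the explicit -n here.
	kubectl --context no-preload-239327 -n kube-system \
	  describe pod metrics-server-6867b74b74-bq7jp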

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (426.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:19:18.240818643 +0000 UTC m=+7111.719823870
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-512125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.813µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-512125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
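Note: because the describe call above hit the context deadline, the image the test asserts on can also be read directly from the deployment spec. The command below is an illustrative sketch rather than harness output, assuming the default-k8s-diff-port-512125 context is still reachable:

	kubectl --context default-k8s-diff-port-512125 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The check only passes when the printed image list contains registry.k8s.io/echoserver:1.4, the custom MetricsScraper image set via "addons enable dashboard --images=..." in the Audit table below.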
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-512125 logs -n 25: (3.167688124s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC | 13 Sep 24 20:17 UTC |
	| start   | -p newest-cni-350416 --memory=2200 --alsologtostderr   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC | 13 Sep 24 20:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	| addons  | enable metrics-server -p newest-cni-350416             | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-350416                                   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	| addons  | enable dashboard -p newest-cni-350416                  | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-350416 --memory=2200 --alsologtostderr   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 20:18:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 20:18:53.759323   79551 out.go:345] Setting OutFile to fd 1 ...
	I0913 20:18:53.759433   79551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:18:53.759441   79551 out.go:358] Setting ErrFile to fd 2...
	I0913 20:18:53.759445   79551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:18:53.759626   79551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 20:18:53.760184   79551 out.go:352] Setting JSON to false
	I0913 20:18:53.761216   79551 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7277,"bootTime":1726251457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 20:18:53.761313   79551 start.go:139] virtualization: kvm guest
	I0913 20:18:53.763562   79551 out.go:177] * [newest-cni-350416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 20:18:53.765452   79551 notify.go:220] Checking for updates...
	I0913 20:18:53.765474   79551 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 20:18:53.767054   79551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 20:18:53.768443   79551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:18:53.769862   79551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:18:53.771259   79551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 20:18:53.772553   79551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 20:18:53.774200   79551 config.go:182] Loaded profile config "newest-cni-350416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:18:53.774601   79551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:53.774679   79551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:53.789594   79551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I0913 20:18:53.790052   79551 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:53.790634   79551 main.go:141] libmachine: Using API Version  1
	I0913 20:18:53.790658   79551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:53.790965   79551 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:53.791202   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:53.791446   79551 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 20:18:53.791728   79551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:53.791771   79551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:53.807421   79551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0913 20:18:53.807951   79551 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:53.808506   79551 main.go:141] libmachine: Using API Version  1
	I0913 20:18:53.808530   79551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:53.808894   79551 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:53.809088   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:54.063702   79551 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 20:18:54.065035   79551 start.go:297] selected driver: kvm2
	I0913 20:18:54.065051   79551 start.go:901] validating driver "kvm2" against &{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:18:54.065168   79551 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 20:18:54.065851   79551 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:18:54.065932   79551 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 20:18:54.081629   79551 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 20:18:54.082184   79551 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 20:18:54.082232   79551 cni.go:84] Creating CNI manager for ""
	I0913 20:18:54.082291   79551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:18:54.082341   79551 start.go:340] cluster config:
	{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:18:54.082504   79551 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:18:54.084301   79551 out.go:177] * Starting "newest-cni-350416" primary control-plane node in "newest-cni-350416" cluster
	I0913 20:18:54.085548   79551 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 20:18:54.085591   79551 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 20:18:54.085606   79551 cache.go:56] Caching tarball of preloaded images
	I0913 20:18:54.085693   79551 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 20:18:54.085704   79551 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 20:18:54.085830   79551 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:18:54.086017   79551 start.go:360] acquireMachinesLock for newest-cni-350416: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 20:18:54.086072   79551 start.go:364] duration metric: took 29.973µs to acquireMachinesLock for "newest-cni-350416"
	I0913 20:18:54.086087   79551 start.go:96] Skipping create...Using existing machine configuration
	I0913 20:18:54.086113   79551 fix.go:54] fixHost starting: 
	I0913 20:18:54.086449   79551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:54.086482   79551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:54.101557   79551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0913 20:18:54.101961   79551 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:54.102500   79551 main.go:141] libmachine: Using API Version  1
	I0913 20:18:54.102518   79551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:54.102830   79551 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:54.103098   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:54.103232   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:54.105099   79551 fix.go:112] recreateIfNeeded on newest-cni-350416: state=Stopped err=<nil>
	I0913 20:18:54.105125   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	W0913 20:18:54.105263   79551 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 20:18:54.107516   79551 out.go:177] * Restarting existing kvm2 VM for "newest-cni-350416" ...
	I0913 20:18:54.108702   79551 main.go:141] libmachine: (newest-cni-350416) Calling .Start
	I0913 20:18:54.108900   79551 main.go:141] libmachine: (newest-cni-350416) Ensuring networks are active...
	I0913 20:18:54.109722   79551 main.go:141] libmachine: (newest-cni-350416) Ensuring network default is active
	I0913 20:18:54.110069   79551 main.go:141] libmachine: (newest-cni-350416) Ensuring network mk-newest-cni-350416 is active
	I0913 20:18:54.110462   79551 main.go:141] libmachine: (newest-cni-350416) Getting domain xml...
	I0913 20:18:54.111232   79551 main.go:141] libmachine: (newest-cni-350416) Creating domain...
	I0913 20:18:55.331693   79551 main.go:141] libmachine: (newest-cni-350416) Waiting to get IP...
	I0913 20:18:55.332682   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:55.333082   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:55.333155   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:55.333083   79600 retry.go:31] will retry after 200.737767ms: waiting for machine to come up
	I0913 20:18:55.535472   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:55.536001   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:55.536031   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:55.535954   79600 retry.go:31] will retry after 267.837737ms: waiting for machine to come up
	I0913 20:18:55.805413   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:55.806058   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:55.806159   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:55.806000   79600 retry.go:31] will retry after 416.295006ms: waiting for machine to come up
	I0913 20:18:56.224217   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:56.224647   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:56.224676   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:56.224600   79600 retry.go:31] will retry after 525.59048ms: waiting for machine to come up
	I0913 20:18:56.751314   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:56.751841   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:56.751864   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:56.751796   79600 retry.go:31] will retry after 464.785636ms: waiting for machine to come up
	I0913 20:18:57.218079   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:57.218563   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:57.218597   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:57.218508   79600 retry.go:31] will retry after 653.393502ms: waiting for machine to come up
	I0913 20:18:57.873205   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:57.873665   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:57.873692   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:57.873614   79600 retry.go:31] will retry after 1.173666669s: waiting for machine to come up
	I0913 20:18:59.048958   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:59.049464   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:59.049486   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:59.049416   79600 retry.go:31] will retry after 958.878636ms: waiting for machine to come up
	I0913 20:19:00.009914   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:00.010307   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:19:00.010338   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:19:00.010270   79600 retry.go:31] will retry after 1.537036673s: waiting for machine to come up
	I0913 20:19:01.549999   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:01.550427   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:19:01.550458   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:19:01.550381   79600 retry.go:31] will retry after 2.242391182s: waiting for machine to come up
	I0913 20:19:03.795746   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:03.796119   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:19:03.796139   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:19:03.796085   79600 retry.go:31] will retry after 2.401285703s: waiting for machine to come up
	I0913 20:19:06.199447   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:06.199894   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:19:06.199921   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:19:06.199839   79600 retry.go:31] will retry after 2.585332609s: waiting for machine to come up
	I0913 20:19:08.788517   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:08.788920   79551 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:19:08.788941   79551 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:19:08.788898   79600 retry.go:31] will retry after 2.836541648s: waiting for machine to come up
	I0913 20:19:11.629116   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.629643   79551 main.go:141] libmachine: (newest-cni-350416) Found IP for machine: 192.168.72.56
	I0913 20:19:11.629670   79551 main.go:141] libmachine: (newest-cni-350416) Reserving static IP address...
	I0913 20:19:11.629693   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has current primary IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.630055   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "newest-cni-350416", mac: "52:54:00:ca:5a:f4", ip: "192.168.72.56"} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.630083   79551 main.go:141] libmachine: (newest-cni-350416) DBG | skip adding static IP to network mk-newest-cni-350416 - found existing host DHCP lease matching {name: "newest-cni-350416", mac: "52:54:00:ca:5a:f4", ip: "192.168.72.56"}
	I0913 20:19:11.630112   79551 main.go:141] libmachine: (newest-cni-350416) Reserved static IP address: 192.168.72.56
	I0913 20:19:11.630126   79551 main.go:141] libmachine: (newest-cni-350416) Waiting for SSH to be available...
	I0913 20:19:11.630139   79551 main.go:141] libmachine: (newest-cni-350416) DBG | Getting to WaitForSSH function...
	I0913 20:19:11.632244   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.632562   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.632583   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.632667   79551 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH client type: external
	I0913 20:19:11.632689   79551 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa (-rw-------)
	I0913 20:19:11.632725   79551 main.go:141] libmachine: (newest-cni-350416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 20:19:11.632748   79551 main.go:141] libmachine: (newest-cni-350416) DBG | About to run SSH command:
	I0913 20:19:11.632759   79551 main.go:141] libmachine: (newest-cni-350416) DBG | exit 0
	I0913 20:19:11.754299   79551 main.go:141] libmachine: (newest-cni-350416) DBG | SSH cmd err, output: <nil>: 
	I0913 20:19:11.754670   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:19:11.755251   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:19:11.757735   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.758053   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.758108   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.758323   79551 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:19:11.758507   79551 machine.go:93] provisionDockerMachine start ...
	I0913 20:19:11.758527   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:11.758740   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:11.760685   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.760961   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.760991   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.761081   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:11.761242   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.761378   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.761550   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:11.761720   79551 main.go:141] libmachine: Using SSH client type: native
	I0913 20:19:11.761957   79551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:19:11.761978   79551 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 20:19:11.862586   79551 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 20:19:11.862618   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:19:11.862860   79551 buildroot.go:166] provisioning hostname "newest-cni-350416"
	I0913 20:19:11.862885   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:19:11.863067   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:11.865802   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.866236   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.866260   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.866369   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:11.866551   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.866699   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.866827   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:11.866970   79551 main.go:141] libmachine: Using SSH client type: native
	I0913 20:19:11.867137   79551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:19:11.867148   79551 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-350416 && echo "newest-cni-350416" | sudo tee /etc/hostname
	I0913 20:19:11.980835   79551 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-350416
	
	I0913 20:19:11.980870   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:11.984394   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.984756   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:11.984783   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:11.984972   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:11.985153   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.985338   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:11.985478   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:11.985639   79551 main.go:141] libmachine: Using SSH client type: native
	I0913 20:19:11.985856   79551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:19:11.985873   79551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-350416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-350416/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-350416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 20:19:12.095249   79551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 20:19:12.095275   79551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 20:19:12.095316   79551 buildroot.go:174] setting up certificates
	I0913 20:19:12.095329   79551 provision.go:84] configureAuth start
	I0913 20:19:12.095342   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:19:12.095617   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:19:12.098470   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.098835   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.098873   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.099073   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.101779   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.102213   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.102256   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.102373   79551 provision.go:143] copyHostCerts
	I0913 20:19:12.102434   79551 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 20:19:12.102444   79551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 20:19:12.102504   79551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 20:19:12.102605   79551 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 20:19:12.102614   79551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 20:19:12.102639   79551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 20:19:12.102702   79551 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 20:19:12.102709   79551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 20:19:12.102729   79551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 20:19:12.102787   79551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.newest-cni-350416 san=[127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416]
	I0913 20:19:12.203231   79551 provision.go:177] copyRemoteCerts
	I0913 20:19:12.203287   79551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 20:19:12.203313   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.206024   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.206402   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.206432   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.206631   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.206785   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.206922   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.207029   79551 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:19:12.288893   79551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 20:19:12.313439   79551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 20:19:12.337748   79551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 20:19:12.361364   79551 provision.go:87] duration metric: took 266.019744ms to configureAuth
	I0913 20:19:12.361395   79551 buildroot.go:189] setting minikube options for container-runtime
	I0913 20:19:12.361626   79551 config.go:182] Loaded profile config "newest-cni-350416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:19:12.361714   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.364525   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.364878   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.364918   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.365149   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.365322   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.365506   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.365656   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.365844   79551 main.go:141] libmachine: Using SSH client type: native
	I0913 20:19:12.366049   79551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:19:12.366070   79551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 20:19:12.587413   79551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 20:19:12.587446   79551 machine.go:96] duration metric: took 828.924193ms to provisionDockerMachine
	I0913 20:19:12.587462   79551 start.go:293] postStartSetup for "newest-cni-350416" (driver="kvm2")
	I0913 20:19:12.587476   79551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 20:19:12.587502   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:12.587829   79551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 20:19:12.587865   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.590676   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.591054   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.591083   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.591237   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.591402   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.591593   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.591769   79551 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:19:12.677362   79551 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 20:19:12.681750   79551 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 20:19:12.681785   79551 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 20:19:12.681850   79551 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 20:19:12.681928   79551 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 20:19:12.682023   79551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 20:19:12.691897   79551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 20:19:12.715794   79551 start.go:296] duration metric: took 128.319383ms for postStartSetup
	I0913 20:19:12.715837   79551 fix.go:56] duration metric: took 18.629724218s for fixHost
	I0913 20:19:12.715861   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.718434   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.718806   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.718841   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.719001   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.719276   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.719431   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.719546   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.719708   79551 main.go:141] libmachine: Using SSH client type: native
	I0913 20:19:12.719883   79551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:19:12.719893   79551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 20:19:12.827461   79551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726258752.800203025
	
	I0913 20:19:12.827485   79551 fix.go:216] guest clock: 1726258752.800203025
	I0913 20:19:12.827492   79551 fix.go:229] Guest: 2024-09-13 20:19:12.800203025 +0000 UTC Remote: 2024-09-13 20:19:12.715842601 +0000 UTC m=+18.993525116 (delta=84.360424ms)
	I0913 20:19:12.827511   79551 fix.go:200] guest clock delta is within tolerance: 84.360424ms
	I0913 20:19:12.827516   79551 start.go:83] releasing machines lock for "newest-cni-350416", held for 18.741436522s
	I0913 20:19:12.827537   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:12.827788   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:19:12.830571   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.830872   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.830902   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.831156   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:12.831675   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:12.831882   79551 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:19:12.831973   79551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 20:19:12.832010   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.832102   79551 ssh_runner.go:195] Run: cat /version.json
	I0913 20:19:12.832124   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:19:12.834964   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.835153   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.835347   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.835370   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.835492   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:12.835500   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.835520   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:12.835708   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:19:12.835742   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.835866   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:19:12.835880   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.836035   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:19:12.836050   79551 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:19:12.836179   79551 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:19:12.911615   79551 ssh_runner.go:195] Run: systemctl --version
	I0913 20:19:12.937260   79551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 20:19:13.084735   79551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 20:19:13.090875   79551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 20:19:13.090935   79551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 20:19:13.108698   79551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 20:19:13.108728   79551 start.go:495] detecting cgroup driver to use...
	I0913 20:19:13.108799   79551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 20:19:13.125140   79551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 20:19:13.139757   79551 docker.go:217] disabling cri-docker service (if available) ...
	I0913 20:19:13.139812   79551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 20:19:13.154156   79551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 20:19:13.168132   79551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 20:19:13.278279   79551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 20:19:13.422467   79551 docker.go:233] disabling docker service ...
	I0913 20:19:13.422546   79551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 20:19:13.436977   79551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 20:19:13.449746   79551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 20:19:13.582525   79551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 20:19:13.690603   79551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 20:19:13.704600   79551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 20:19:13.723337   79551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 20:19:13.723482   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.735494   79551 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 20:19:13.735581   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.746721   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.758074   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.769764   79551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 20:19:13.781194   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.792562   79551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.810798   79551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:19:13.822353   79551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 20:19:13.832656   79551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 20:19:13.832726   79551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 20:19:13.848815   79551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 20:19:13.860692   79551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:19:13.987850   79551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 20:19:14.074538   79551 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 20:19:14.074606   79551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 20:19:14.079409   79551 start.go:563] Will wait 60s for crictl version
	I0913 20:19:14.079454   79551 ssh_runner.go:195] Run: which crictl
	I0913 20:19:14.083144   79551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 20:19:14.122455   79551 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 20:19:14.122563   79551 ssh_runner.go:195] Run: crio --version
	I0913 20:19:14.151139   79551 ssh_runner.go:195] Run: crio --version
	I0913 20:19:14.182139   79551 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 20:19:14.183234   79551 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:19:14.185908   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:14.186297   79551 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:19:05 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:19:14.186331   79551 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:19:14.186599   79551 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 20:19:14.191083   79551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 20:19:14.205601   79551 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
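	
	Note (editorial): the provisioning trace above shows minikube pointing crictl at the CRI-O socket and rewriting /etc/crio/crio.conf.d/02-crio.conf before restarting the runtime. For readers retracing the setup by hand, a minimal shell sketch of those steps, assembled only from the commands captured in the log above (the config path, socket path, and pause image tag are taken from the log, not independently verified):
	
	# point crictl at the CRI-O socket (as logged above)
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# drop any stale conmon_cgroup entry and recreate it under the new manager
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart the runtime, then wait for /var/run/crio/crio.sock as the log does
	sudo systemctl daemon-reload && sudo systemctl restart crio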
	
	
	==> CRI-O <==
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.873347780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258758873326813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff360964-1fce-4078-b165-de09d6917634 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.873852621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0430afae-ce8d-40ee-97b4-16f913c322c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.873938863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0430afae-ce8d-40ee-97b4-16f913c322c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.874168647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0430afae-ce8d-40ee-97b4-16f913c322c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.923565669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c8a4809-3fca-4b3f-9be5-c421106b2814 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.923694623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c8a4809-3fca-4b3f-9be5-c421106b2814 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.926576178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19e90c3d-75df-4c27-805f-116bbb682940 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.927369953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258758927336540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19e90c3d-75df-4c27-805f-116bbb682940 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.928494240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88c08478-bb43-4986-a6e8-3be3ff463abc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.928597269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88c08478-bb43-4986-a6e8-3be3ff463abc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.928966116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88c08478-bb43-4986-a6e8-3be3ff463abc name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.972999665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e5d536a-5209-4ca1-bb47-2992db3c31f3 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.973806951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e5d536a-5209-4ca1-bb47-2992db3c31f3 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.975237325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=070d5a25-be46-48c5-b044-209ba5fd0fa8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.975839548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258758975730814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=070d5a25-be46-48c5-b044-209ba5fd0fa8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.976655381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc7fd7c0-5d01-4bd9-ab5f-af2534eee2b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.976817009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc7fd7c0-5d01-4bd9-ab5f-af2534eee2b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:18 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:18.977849930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc7fd7c0-5d01-4bd9-ab5f-af2534eee2b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.024322521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9dec3bf-b7e8-40b0-809b-21bb62778648 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.024452181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9dec3bf-b7e8-40b0-809b-21bb62778648 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.026067934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d94d6f40-bbda-49a0-b742-2c115a4b38ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.026646630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258759026614470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d94d6f40-bbda-49a0-b742-2c115a4b38ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.027452257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=499f52fc-ea54-4ee7-80c8-6b1b47758799 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.027521716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=499f52fc-ea54-4ee7-80c8-6b1b47758799 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:19:19 default-k8s-diff-port-512125 crio[712]: time="2024-09-13 20:19:19.027855475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc,PodSandboxId:58a301000ce7d132d84c690126e5f2f9f7eff2fc43a1d7768691bff03ea08313,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257781400421218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd5034f-5d90-4155-acab-804dca90a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191,PodSandboxId:957a4906fe4a4e7af85afea666bfa6beda0c586c60782f73ba1c3ce8f0aacb43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780435880084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pm4s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a23abb-d3a2-415b-a992-971fe65ee840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7,PodSandboxId:8ca473c893d6c0b9405ac375a7d7c2ad2715421b55d1e9d84ee73e677eddb418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257780357558748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qg68,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 06d7bc39-7f7b-405c-828f-22d68741b063,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e,PodSandboxId:ca0fbd46713432a083043ccee64f17e4ceca4a97500e12a9fea4dbd07362d118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726257779537974164,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zfwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b62cff15-1c67-42d6-a30b-6f43a914fa0c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db,PodSandboxId:f3d61314d58f911a3ad49dc18ac26e4edca862c5c03c819a84071747d5ebc7da,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257768419189757,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f711b8224e50435ad450d5b09efe44bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4,PodSandboxId:07baef0ade19d34ea642f9426e6fd5557dc09c1231327f9c854b988cb784afcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257768395139286,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a,PodSandboxId:80d1a409081b6c307bf3d47519ec9f442388e5ed7cec5c223a6203f1d926d853,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257768356325605,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e143ccd1e9a00975eade6b9f4aeb4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1,PodSandboxId:11cb043226726e899517bfb84bed802095dbc122bbd90257e857f8ee6bcbe74f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257768284642266,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32e7999a85302bfc3f13eb8f97e0e72c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258,PodSandboxId:1127567245cc692a52caf3f61e8bdb50773f23137b36b09cebc8af2b3dc89ff1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726257480085198223,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-512125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b2cdcedba1ec7b01b6340beecfc4da,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=499f52fc-ea54-4ee7-80c8-6b1b47758799 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	727272a23be61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   58a301000ce7d       storage-provisioner
	7c02b3652c8f8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   957a4906fe4a4       coredns-7c65d6cfc9-pm4s9
	02eb787bf6a19       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   8ca473c893d6c       coredns-7c65d6cfc9-2qg68
	00782ad9f16fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   ca0fbd4671343       kube-proxy-6zfwm
	25c925f18c164       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   f3d61314d58f9       etcd-default-k8s-diff-port-512125
	3b172dac6b2fe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   07baef0ade19d       kube-apiserver-default-k8s-diff-port-512125
	b227cf71d8db5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   80d1a409081b6       kube-scheduler-default-k8s-diff-port-512125
	1c7c881fbf40e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   11cb043226726       kube-controller-manager-default-k8s-diff-port-512125
	683d63db2439b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   1127567245cc6       kube-apiserver-default-k8s-diff-port-512125
	
	
	==> coredns [02eb787bf6a192de4d39c5e03d0b0940577c08e94c3a958c288248ddf709afa7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7c02b3652c8f8a1a5dfd3605162b44848742e491870aadfc38df2bf02f7aa191] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-512125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-512125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=default-k8s-diff-port-512125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 20:02:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-512125
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:19:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:18:21 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:18:21 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:18:21 +0000   Fri, 13 Sep 2024 20:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:18:21 +0000   Fri, 13 Sep 2024 20:02:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.3
	  Hostname:    default-k8s-diff-port-512125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa295e50dae8466ebb3dcc5231a36e2f
	  System UUID:                fa295e50-dae8-466e-bb3d-cc5231a36e2f
	  Boot ID:                    abbc88b8-2b85-4789-973d-2b37147e3020
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2qg68                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-pm4s9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-512125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-512125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-512125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6zfwm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-512125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-tk8qn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-512125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-512125 event: Registered Node default-k8s-diff-port-512125 in Controller
	
	
	==> dmesg <==
	[  +0.050519] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.876583] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.607494] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.724239] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.062732] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064778] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.188071] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.188934] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.329211] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.416439] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.067089] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.123926] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[Sep13 19:58] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.952992] kauditd_printk_skb: 87 callbacks suppressed
	[Sep13 20:02] systemd-fstab-generator[2561]: Ignoring "noauto" option for root device
	[  +0.059068] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.487896] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +0.081889] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.316007] systemd-fstab-generator[2993]: Ignoring "noauto" option for root device
	[  +0.121809] kauditd_printk_skb: 12 callbacks suppressed
	[Sep13 20:03] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [25c925f18c1640fdc04855eb6a2c9b653041cd3edece8e4a852a8e2872f674db] <==
	{"level":"info","ts":"2024-09-13T20:02:49.231970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf received MsgVoteResp from bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.231978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became leader at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.231985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bd69003d43e617bf elected leader bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-09-13T20:02:49.235959Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.240031Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bd69003d43e617bf","local-member-attributes":"{Name:default-k8s-diff-port-512125 ClientURLs:[https://192.168.61.3:2379]}","request-path":"/0/members/bd69003d43e617bf/attributes","cluster-id":"bd78613cdcde8fe4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T20:02:49.240165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T20:02:49.240497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T20:02:49.240635Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T20:02:49.240662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T20:02:49.241285Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T20:02:49.249112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T20:02:49.249719Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T20:02:49.252441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.3:2379"}
	{"level":"info","ts":"2024-09-13T20:02:49.283665Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd78613cdcde8fe4","local-member-id":"bd69003d43e617bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.299689Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:02:49.317863Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T20:12:49.424934Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2024-09-13T20:12:49.434731Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":725,"took":"9.435955ms","hash":3355780421,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2326528,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-13T20:12:49.434825Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3355780421,"revision":725,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T20:17:49.434310Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2024-09-13T20:17:49.440180Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":968,"took":"5.161701ms","hash":3224548840,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-13T20:17:49.440336Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3224548840,"revision":968,"compact-revision":725}
	{"level":"info","ts":"2024-09-13T20:18:26.473681Z","caller":"traceutil/trace.go:171","msg":"trace[138711295] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"168.084962ms","start":"2024-09-13T20:18:26.305552Z","end":"2024-09-13T20:18:26.473637Z","steps":["trace[138711295] 'process raft request'  (duration: 167.956921ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T20:18:26.737345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.029861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T20:18:26.737442Z","caller":"traceutil/trace.go:171","msg":"trace[1161693405] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1244; }","duration":"143.259601ms","start":"2024-09-13T20:18:26.594167Z","end":"2024-09-13T20:18:26.737426Z","steps":["trace[1161693405] 'range keys from in-memory index tree'  (duration: 142.969824ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:19:21 up 21 min,  0 users,  load average: 0.71, 0.38, 0.20
	Linux default-k8s-diff-port-512125 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b172dac6b2fe57ea534fdb4e230ff081590b6a591d05a565a442ececc0a1da4] <==
	I0913 20:15:52.168736       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:15:52.168802       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:17:51.167801       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:51.168109       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:17:52.171305       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:52.171365       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 20:17:52.171446       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:17:52.171500       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:17:52.172513       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:17:52.172643       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:18:52.173625       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:18:52.173815       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:18:52.173873       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:18:52.173893       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0913 20:18:52.174984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:18:52.175046       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [683d63db2439b412660887896403aa799b25c848d4dbd98b9a50e08af1663258] <==
	W0913 20:02:40.399694       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.419486       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.463223       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.466808       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.480031       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.485464       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.553488       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.561976       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.591993       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.799670       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.819464       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.844168       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.862136       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.869303       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:40.902666       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.075127       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.126562       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.154011       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.286688       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.391667       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.476941       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.496305       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.505060       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.510503       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0913 20:02:45.581627       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1c7c881fbf40e5abef1e031d9144018705696ddc7366b9d3140690504d0a30a1] <==
	I0913 20:13:58.777903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:14:03.026714       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="203.663µs"
	I0913 20:14:16.030350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="173.815µs"
	E0913 20:14:28.324730       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:28.788315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:14:58.332105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:58.799029       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:15:28.338258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:28.809404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:15:58.344513       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:58.816660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:28.350550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:28.824293       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:58.357556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:58.833017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:28.363407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:28.841591       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:58.378672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:58.852254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:18:21.297258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-512125"
	E0913 20:18:28.385816       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:18:28.861964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:18:58.392350       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:18:58.871637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:19:11.025932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="980.114µs"
	
	
	==> kube-proxy [00782ad9f16fb059fd73915c67417c6aab1d191dbcd6af0059a9949bc8bc295e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 20:02:59.809546       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 20:02:59.840326       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.3"]
	E0913 20:02:59.840513       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 20:02:59.924649       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 20:02:59.924965       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 20:02:59.924999       1 server_linux.go:169] "Using iptables Proxier"
	I0913 20:02:59.931854       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 20:02:59.932135       1 server.go:483] "Version info" version="v1.31.1"
	I0913 20:02:59.932147       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 20:02:59.936990       1 config.go:199] "Starting service config controller"
	I0913 20:02:59.937083       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 20:02:59.937128       1 config.go:105] "Starting endpoint slice config controller"
	I0913 20:02:59.937136       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 20:02:59.937607       1 config.go:328] "Starting node config controller"
	I0913 20:02:59.937613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 20:03:00.037710       1 shared_informer.go:320] Caches are synced for node config
	I0913 20:03:00.037803       1 shared_informer.go:320] Caches are synced for service config
	I0913 20:03:00.037812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b227cf71d8db583d3217e10f5801b9139d31d9df3bb72e96789ebfcd8099db4a] <==
	W0913 20:02:51.222246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 20:02:51.222298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:51.222332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 20:02:51.222384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:51.222344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:51.222445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.088834       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 20:02:52.088897       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 20:02:52.101928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 20:02:52.102042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.107104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 20:02:52.107635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.209655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 20:02:52.209799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.371498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 20:02:52.371562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.407135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:52.407191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.408348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 20:02:52.408394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.459633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 20:02:52.459669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 20:02:52.494617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 20:02:52.494651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 20:02:55.114858       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:18:21 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:21.009547    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:18:24 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:24.311249    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258704310579990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:24 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:24.311355    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258704310579990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:33 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:33.009464    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:18:34 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:34.313551    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258714313167698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:34 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:34.313581    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258714313167698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:44 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:44.008886    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:18:44 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:44.315320    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258724315014294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:44 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:44.315503    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258724315014294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:54.023361    2883 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:54.317440    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258734316969686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:54 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:54.317554    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258734316969686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:57 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:57.025950    2883 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 13 20:18:57 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:57.026033    2883 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 13 20:18:57 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:57.026224    2883 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdrl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-tk8qn_kube-system(e4e5d427-7760-4397-8529-3ae3734ed891): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 13 20:18:57 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:18:57.027678    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:19:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:19:04.319951    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258744319427337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:19:04 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:19:04.320462    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258744319427337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:19:11 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:19:11.008960    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tk8qn" podUID="e4e5d427-7760-4397-8529-3ae3734ed891"
	Sep 13 20:19:14 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:19:14.325953    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258754323307206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:19:14 default-k8s-diff-port-512125 kubelet[2883]: E0913 20:19:14.325999    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258754323307206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [727272a23be61853f465dcb60dae0ed961217133d96457374632a3fcc490b3fc] <==
	I0913 20:03:01.511928       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 20:03:01.531597       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 20:03:01.531643       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 20:03:01.558588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 20:03:01.559120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62!
	I0913 20:03:01.561274       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13d78089-4533-4fc7-aeb3-4b7fda570d53", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62 became leader
	I0913 20:03:01.659906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512125_ca6bfefb-05ff-422d-a0d8-62ddadbf9f62!
	

                                                
                                                
-- /stdout --
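The repeated ImagePullBackOff and "lookup fake.domain: no such host" entries in the kubelet log above follow from the metrics-server addon having been pointed at an unreachable registry earlier in this run. Reassembling the wrapped Audit rows shown later in this report, the enabling command was roughly the following (a sketch for manual reproduction outside the test harness, assuming the same profile name and minikube binary):

	# fake.domain never resolves, so the metrics-server image pull is expected to fail
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-512125 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain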
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tk8qn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn: exit status 1 (76.452627ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tk8qn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-512125 describe pod metrics-server-6867b74b74-tk8qn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (426.98s)
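To re-run those post-mortem queries by hand, the equivalent commands are roughly the following (a sketch; note that the pod flagged as non-running lives in kube-system according to the kubelet log, so the describe needs a namespace flag to locate it if it still exists):

	# list pods not in Running phase across all namespaces, then describe the flagged one
	kubectl --context default-k8s-diff-port-512125 get po -A \
		--field-selector=status.phase!=Running \
		-o=jsonpath='{.items[*].metadata.name}'
	kubectl --context default-k8s-diff-port-512125 -n kube-system \
		describe pod metrics-server-6867b74b74-tk8qn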

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175374 -n embed-certs-175374
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-13 20:18:51.375777846 +0000 UTC m=+7084.854783064
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-175374 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-175374 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.464µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-175374 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
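The image assertion above needs the container image list from the dashboard-metrics-scraper deployment, which the timed-out describe never returned. Checked by hand against the same profile, the equivalent queries would look roughly like this (a sketch, assuming the embed-certs-175374 context is still reachable):

	# the test waits on this label selector, then inspects the scraper deployment's images
	kubectl --context embed-certs-175374 -n kubernetes-dashboard \
		get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-175374 -n kubernetes-dashboard \
		describe deploy/dashboard-metrics-scraper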
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175374 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-175374 logs -n 25: (1.209715381s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC | 13 Sep 24 20:17 UTC |
	| start   | -p newest-cni-350416 --memory=2200 --alsologtostderr   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:17 UTC | 13 Sep 24 20:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	| addons  | enable metrics-server -p newest-cni-350416             | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC | 13 Sep 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-350416                                   | newest-cni-350416            | jenkins | v1.34.0 | 13 Sep 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 20:17:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 20:17:53.070497   78618 out.go:345] Setting OutFile to fd 1 ...
	I0913 20:17:53.070593   78618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:17:53.070600   78618 out.go:358] Setting ErrFile to fd 2...
	I0913 20:17:53.070605   78618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 20:17:53.070769   78618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 20:17:53.071310   78618 out.go:352] Setting JSON to false
	I0913 20:17:53.072197   78618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7216,"bootTime":1726251457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 20:17:53.072297   78618 start.go:139] virtualization: kvm guest
	I0913 20:17:53.074742   78618 out.go:177] * [newest-cni-350416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 20:17:53.076168   78618 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 20:17:53.076173   78618 notify.go:220] Checking for updates...
	I0913 20:17:53.077496   78618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 20:17:53.078841   78618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:17:53.080144   78618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.081355   78618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 20:17:53.082651   78618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 20:17:53.084248   78618 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084356   78618 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084465   78618 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:17:53.084558   78618 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 20:17:53.123733   78618 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 20:17:53.125240   78618 start.go:297] selected driver: kvm2
	I0913 20:17:53.125264   78618 start.go:901] validating driver "kvm2" against <nil>
	I0913 20:17:53.125275   78618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 20:17:53.126002   78618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:17:53.126118   78618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 20:17:53.141801   78618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 20:17:53.141850   78618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0913 20:17:53.141910   78618 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0913 20:17:53.142251   78618 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 20:17:53.142288   78618 cni.go:84] Creating CNI manager for ""
	I0913 20:17:53.142331   78618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:17:53.142339   78618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 20:17:53.142390   78618 start.go:340] cluster config:
	{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:17:53.142485   78618 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 20:17:53.144224   78618 out.go:177] * Starting "newest-cni-350416" primary control-plane node in "newest-cni-350416" cluster
	I0913 20:17:53.145413   78618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 20:17:53.145459   78618 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 20:17:53.145469   78618 cache.go:56] Caching tarball of preloaded images
	I0913 20:17:53.145549   78618 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 20:17:53.145592   78618 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 20:17:53.145722   78618 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:17:53.145751   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json: {Name:mkf82a3c8c9c4e29633352da6b0f98ea61c3d7f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:17:53.145944   78618 start.go:360] acquireMachinesLock for newest-cni-350416: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 20:17:53.145996   78618 start.go:364] duration metric: took 30.476µs to acquireMachinesLock for "newest-cni-350416"
	I0913 20:17:53.146021   78618 start.go:93] Provisioning new machine with config: &{Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:17:53.146081   78618 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 20:17:53.147820   78618 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 20:17:53.147975   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:17:53.148020   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:17:53.163213   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0913 20:17:53.163746   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:17:53.164384   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:17:53.164409   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:17:53.164812   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:17:53.165005   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:17:53.165217   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:17:53.165432   78618 start.go:159] libmachine.API.Create for "newest-cni-350416" (driver="kvm2")
	I0913 20:17:53.165457   78618 client.go:168] LocalClient.Create starting
	I0913 20:17:53.165490   78618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem
	I0913 20:17:53.165525   78618 main.go:141] libmachine: Decoding PEM data...
	I0913 20:17:53.165540   78618 main.go:141] libmachine: Parsing certificate...
	I0913 20:17:53.165588   78618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem
	I0913 20:17:53.165605   78618 main.go:141] libmachine: Decoding PEM data...
	I0913 20:17:53.165615   78618 main.go:141] libmachine: Parsing certificate...
	I0913 20:17:53.165628   78618 main.go:141] libmachine: Running pre-create checks...
	I0913 20:17:53.165638   78618 main.go:141] libmachine: (newest-cni-350416) Calling .PreCreateCheck
	I0913 20:17:53.166045   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:17:53.166531   78618 main.go:141] libmachine: Creating machine...
	I0913 20:17:53.166545   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Create
	I0913 20:17:53.166693   78618 main.go:141] libmachine: (newest-cni-350416) Creating KVM machine...
	I0913 20:17:53.167907   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found existing default KVM network
	I0913 20:17:53.169112   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.168969   78657 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:be:5d:74} reservation:<nil>}
	I0913 20:17:53.169801   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.169741   78657 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:5e:80} reservation:<nil>}
	I0913 20:17:53.170645   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.170569   78657 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:64:3c} reservation:<nil>}
	I0913 20:17:53.171682   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.171623   78657 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003acfc0}
	I0913 20:17:53.171713   78618 main.go:141] libmachine: (newest-cni-350416) DBG | created network xml: 
	I0913 20:17:53.171726   78618 main.go:141] libmachine: (newest-cni-350416) DBG | <network>
	I0913 20:17:53.171735   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <name>mk-newest-cni-350416</name>
	I0913 20:17:53.171743   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <dns enable='no'/>
	I0913 20:17:53.171769   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   
	I0913 20:17:53.171791   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0913 20:17:53.171803   78618 main.go:141] libmachine: (newest-cni-350416) DBG |     <dhcp>
	I0913 20:17:53.171812   78618 main.go:141] libmachine: (newest-cni-350416) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0913 20:17:53.171818   78618 main.go:141] libmachine: (newest-cni-350416) DBG |     </dhcp>
	I0913 20:17:53.171824   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   </ip>
	I0913 20:17:53.171839   78618 main.go:141] libmachine: (newest-cni-350416) DBG |   
	I0913 20:17:53.171843   78618 main.go:141] libmachine: (newest-cni-350416) DBG | </network>
	I0913 20:17:53.171849   78618 main.go:141] libmachine: (newest-cni-350416) DBG | 
	I0913 20:17:53.177191   78618 main.go:141] libmachine: (newest-cni-350416) DBG | trying to create private KVM network mk-newest-cni-350416 192.168.72.0/24...
	I0913 20:17:53.249161   78618 main.go:141] libmachine: (newest-cni-350416) DBG | private KVM network mk-newest-cni-350416 192.168.72.0/24 created
	I0913 20:17:53.249206   78618 main.go:141] libmachine: (newest-cni-350416) Setting up store path in /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 ...
	I0913 20:17:53.249223   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.249133   78657 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.249235   78618 main.go:141] libmachine: (newest-cni-350416) Building disk image from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 20:17:53.249265   78618 main.go:141] libmachine: (newest-cni-350416) Downloading /home/jenkins/minikube-integration/19636-3902/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 20:17:53.497980   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.497825   78657 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa...
	I0913 20:17:53.694456   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.694323   78657 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/newest-cni-350416.rawdisk...
	I0913 20:17:53.694477   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Writing magic tar header
	I0913 20:17:53.694489   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Writing SSH key tar header
	I0913 20:17:53.694497   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:53.694436   78657 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 ...
	I0913 20:17:53.694523   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416
	I0913 20:17:53.694555   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416 (perms=drwx------)
	I0913 20:17:53.694576   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube/machines (perms=drwxr-xr-x)
	I0913 20:17:53.694594   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube/machines
	I0913 20:17:53.694607   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 20:17:53.694612   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3902
	I0913 20:17:53.694619   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902/.minikube (perms=drwxr-xr-x)
	I0913 20:17:53.694625   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 20:17:53.694638   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home/jenkins
	I0913 20:17:53.694660   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Checking permissions on dir: /home
	I0913 20:17:53.694673   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Skipping /home - not owner
	I0913 20:17:53.694689   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration/19636-3902 (perms=drwxrwxr-x)
	I0913 20:17:53.694700   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 20:17:53.694708   78618 main.go:141] libmachine: (newest-cni-350416) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 20:17:53.694715   78618 main.go:141] libmachine: (newest-cni-350416) Creating domain...
	I0913 20:17:53.695851   78618 main.go:141] libmachine: (newest-cni-350416) define libvirt domain using xml: 
	I0913 20:17:53.695876   78618 main.go:141] libmachine: (newest-cni-350416) <domain type='kvm'>
	I0913 20:17:53.695885   78618 main.go:141] libmachine: (newest-cni-350416)   <name>newest-cni-350416</name>
	I0913 20:17:53.695891   78618 main.go:141] libmachine: (newest-cni-350416)   <memory unit='MiB'>2200</memory>
	I0913 20:17:53.695900   78618 main.go:141] libmachine: (newest-cni-350416)   <vcpu>2</vcpu>
	I0913 20:17:53.695910   78618 main.go:141] libmachine: (newest-cni-350416)   <features>
	I0913 20:17:53.695929   78618 main.go:141] libmachine: (newest-cni-350416)     <acpi/>
	I0913 20:17:53.695945   78618 main.go:141] libmachine: (newest-cni-350416)     <apic/>
	I0913 20:17:53.695952   78618 main.go:141] libmachine: (newest-cni-350416)     <pae/>
	I0913 20:17:53.695958   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.695967   78618 main.go:141] libmachine: (newest-cni-350416)   </features>
	I0913 20:17:53.695974   78618 main.go:141] libmachine: (newest-cni-350416)   <cpu mode='host-passthrough'>
	I0913 20:17:53.695981   78618 main.go:141] libmachine: (newest-cni-350416)   
	I0913 20:17:53.695990   78618 main.go:141] libmachine: (newest-cni-350416)   </cpu>
	I0913 20:17:53.695998   78618 main.go:141] libmachine: (newest-cni-350416)   <os>
	I0913 20:17:53.696012   78618 main.go:141] libmachine: (newest-cni-350416)     <type>hvm</type>
	I0913 20:17:53.696023   78618 main.go:141] libmachine: (newest-cni-350416)     <boot dev='cdrom'/>
	I0913 20:17:53.696037   78618 main.go:141] libmachine: (newest-cni-350416)     <boot dev='hd'/>
	I0913 20:17:53.696042   78618 main.go:141] libmachine: (newest-cni-350416)     <bootmenu enable='no'/>
	I0913 20:17:53.696049   78618 main.go:141] libmachine: (newest-cni-350416)   </os>
	I0913 20:17:53.696054   78618 main.go:141] libmachine: (newest-cni-350416)   <devices>
	I0913 20:17:53.696061   78618 main.go:141] libmachine: (newest-cni-350416)     <disk type='file' device='cdrom'>
	I0913 20:17:53.696084   78618 main.go:141] libmachine: (newest-cni-350416)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/boot2docker.iso'/>
	I0913 20:17:53.696103   78618 main.go:141] libmachine: (newest-cni-350416)       <target dev='hdc' bus='scsi'/>
	I0913 20:17:53.696116   78618 main.go:141] libmachine: (newest-cni-350416)       <readonly/>
	I0913 20:17:53.696129   78618 main.go:141] libmachine: (newest-cni-350416)     </disk>
	I0913 20:17:53.696137   78618 main.go:141] libmachine: (newest-cni-350416)     <disk type='file' device='disk'>
	I0913 20:17:53.696149   78618 main.go:141] libmachine: (newest-cni-350416)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 20:17:53.696164   78618 main.go:141] libmachine: (newest-cni-350416)       <source file='/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/newest-cni-350416.rawdisk'/>
	I0913 20:17:53.696179   78618 main.go:141] libmachine: (newest-cni-350416)       <target dev='hda' bus='virtio'/>
	I0913 20:17:53.696189   78618 main.go:141] libmachine: (newest-cni-350416)     </disk>
	I0913 20:17:53.696199   78618 main.go:141] libmachine: (newest-cni-350416)     <interface type='network'>
	I0913 20:17:53.696205   78618 main.go:141] libmachine: (newest-cni-350416)       <source network='mk-newest-cni-350416'/>
	I0913 20:17:53.696212   78618 main.go:141] libmachine: (newest-cni-350416)       <model type='virtio'/>
	I0913 20:17:53.696217   78618 main.go:141] libmachine: (newest-cni-350416)     </interface>
	I0913 20:17:53.696222   78618 main.go:141] libmachine: (newest-cni-350416)     <interface type='network'>
	I0913 20:17:53.696233   78618 main.go:141] libmachine: (newest-cni-350416)       <source network='default'/>
	I0913 20:17:53.696246   78618 main.go:141] libmachine: (newest-cni-350416)       <model type='virtio'/>
	I0913 20:17:53.696257   78618 main.go:141] libmachine: (newest-cni-350416)     </interface>
	I0913 20:17:53.696267   78618 main.go:141] libmachine: (newest-cni-350416)     <serial type='pty'>
	I0913 20:17:53.696275   78618 main.go:141] libmachine: (newest-cni-350416)       <target port='0'/>
	I0913 20:17:53.696283   78618 main.go:141] libmachine: (newest-cni-350416)     </serial>
	I0913 20:17:53.696291   78618 main.go:141] libmachine: (newest-cni-350416)     <console type='pty'>
	I0913 20:17:53.696301   78618 main.go:141] libmachine: (newest-cni-350416)       <target type='serial' port='0'/>
	I0913 20:17:53.696327   78618 main.go:141] libmachine: (newest-cni-350416)     </console>
	I0913 20:17:53.696344   78618 main.go:141] libmachine: (newest-cni-350416)     <rng model='virtio'>
	I0913 20:17:53.696351   78618 main.go:141] libmachine: (newest-cni-350416)       <backend model='random'>/dev/random</backend>
	I0913 20:17:53.696358   78618 main.go:141] libmachine: (newest-cni-350416)     </rng>
	I0913 20:17:53.696363   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.696368   78618 main.go:141] libmachine: (newest-cni-350416)     
	I0913 20:17:53.696374   78618 main.go:141] libmachine: (newest-cni-350416)   </devices>
	I0913 20:17:53.696380   78618 main.go:141] libmachine: (newest-cni-350416) </domain>
	I0913 20:17:53.696387   78618 main.go:141] libmachine: (newest-cni-350416) 
	I0913 20:17:53.700720   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:6d:56:e9 in network default
	I0913 20:17:53.701294   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring networks are active...
	I0913 20:17:53.701319   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:53.701919   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring network default is active
	I0913 20:17:53.702293   78618 main.go:141] libmachine: (newest-cni-350416) Ensuring network mk-newest-cni-350416 is active
	I0913 20:17:53.702778   78618 main.go:141] libmachine: (newest-cni-350416) Getting domain xml...
	I0913 20:17:53.703421   78618 main.go:141] libmachine: (newest-cni-350416) Creating domain...
	I0913 20:17:54.970084   78618 main.go:141] libmachine: (newest-cni-350416) Waiting to get IP...
	I0913 20:17:54.970893   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:54.971372   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:54.971406   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:54.971360   78657 retry.go:31] will retry after 284.279719ms: waiting for machine to come up
	I0913 20:17:55.257056   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.257642   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.257721   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.257618   78657 retry.go:31] will retry after 364.649975ms: waiting for machine to come up
	I0913 20:17:55.624307   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.624756   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.624784   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.624741   78657 retry.go:31] will retry after 351.238866ms: waiting for machine to come up
	I0913 20:17:55.977346   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:55.977888   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:55.977915   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:55.977853   78657 retry.go:31] will retry after 522.890335ms: waiting for machine to come up
	I0913 20:17:56.502105   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:56.502648   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:56.502674   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:56.502586   78657 retry.go:31] will retry after 513.308242ms: waiting for machine to come up
	I0913 20:17:57.017258   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:57.017728   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:57.017790   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:57.017705   78657 retry.go:31] will retry after 619.411725ms: waiting for machine to come up
	I0913 20:17:57.638526   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:57.638898   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:57.638950   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:57.638877   78657 retry.go:31] will retry after 1.010741913s: waiting for machine to come up
	I0913 20:17:58.650971   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:58.651466   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:58.651491   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:58.651419   78657 retry.go:31] will retry after 915.874231ms: waiting for machine to come up
	I0913 20:17:59.568434   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:17:59.568867   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:17:59.568908   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:17:59.568813   78657 retry.go:31] will retry after 1.198526884s: waiting for machine to come up
	I0913 20:18:00.769373   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:00.769749   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:00.769778   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:00.769701   78657 retry.go:31] will retry after 2.086733775s: waiting for machine to come up
	I0913 20:18:02.858968   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:02.859429   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:02.859453   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:02.859396   78657 retry.go:31] will retry after 2.555556586s: waiting for machine to come up
	I0913 20:18:05.416191   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:05.416660   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:05.416689   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:05.416629   78657 retry.go:31] will retry after 3.585122192s: waiting for machine to come up
	I0913 20:18:09.003278   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:09.003679   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:09.003697   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:09.003659   78657 retry.go:31] will retry after 4.250465496s: waiting for machine to come up
	I0913 20:18:13.256148   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:13.256661   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find current IP address of domain newest-cni-350416 in network mk-newest-cni-350416
	I0913 20:18:13.256681   78618 main.go:141] libmachine: (newest-cni-350416) DBG | I0913 20:18:13.256617   78657 retry.go:31] will retry after 4.555625296s: waiting for machine to come up
	I0913 20:18:17.815183   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.815655   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has current primary IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.815674   78618 main.go:141] libmachine: (newest-cni-350416) Found IP for machine: 192.168.72.56
	I0913 20:18:17.815686   78618 main.go:141] libmachine: (newest-cni-350416) Reserving static IP address...
	I0913 20:18:17.816054   78618 main.go:141] libmachine: (newest-cni-350416) DBG | unable to find host DHCP lease matching {name: "newest-cni-350416", mac: "52:54:00:ca:5a:f4", ip: "192.168.72.56"} in network mk-newest-cni-350416
	I0913 20:18:17.892348   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Getting to WaitForSSH function...
	I0913 20:18:17.892375   78618 main.go:141] libmachine: (newest-cni-350416) Reserved static IP address: 192.168.72.56
	I0913 20:18:17.892387   78618 main.go:141] libmachine: (newest-cni-350416) Waiting for SSH to be available...
	I0913 20:18:17.895469   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.895847   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:17.895885   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:17.896035   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH client type: external
	I0913 20:18:17.896068   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa (-rw-------)
	I0913 20:18:17.896096   78618 main.go:141] libmachine: (newest-cni-350416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 20:18:17.896116   78618 main.go:141] libmachine: (newest-cni-350416) DBG | About to run SSH command:
	I0913 20:18:17.896128   78618 main.go:141] libmachine: (newest-cni-350416) DBG | exit 0
	I0913 20:18:18.022569   78618 main.go:141] libmachine: (newest-cni-350416) DBG | SSH cmd err, output: <nil>: 
	I0913 20:18:18.022808   78618 main.go:141] libmachine: (newest-cni-350416) KVM machine creation complete!
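The back-off loop logged above (retry.go waiting roughly 300ms and backing off to several seconds until the guest obtains a DHCP lease) can be approximated outside minikube. The sketch below is an illustration only, not the kvm2 driver's implementation (which talks to libvirt directly): it shells out to `virsh net-dhcp-leases`, and the network name and MAC address are simply taken from the log, with the column parsing assumed from virsh's standard table layout.

// waitip.go: a minimal sketch of the wait-for-lease pattern seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls the libvirt network until a lease for mac appears or the
// timeout expires, roughly doubling the delay between attempts.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// Assumed columns: expiry date, expiry time, MAC, protocol, IP/prefix, hostname, client ID.
				fields := strings.Fields(line)
				if len(fields) >= 5 {
					return strings.SplitN(fields[4], "/", 2)[0], nil
				}
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s after %s", mac, network, timeout)
}

func main() {
	ip, err := waitForIP("mk-newest-cni-350416", "52:54:00:ca:5a:f4", 2*time.Minute)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("guest IP:", ip)
}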
	I0913 20:18:18.023083   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:18:18.023635   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:18.023792   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:18.023936   78618 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 20:18:18.023950   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:18.025193   78618 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 20:18:18.025210   78618 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 20:18:18.025215   78618 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 20:18:18.025220   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.027344   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.027770   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.027797   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.027955   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.028114   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.028276   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.028371   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.028512   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.028721   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.028736   78618 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 20:18:18.145557   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 20:18:18.145581   78618 main.go:141] libmachine: Detecting the provisioner...
	I0913 20:18:18.145589   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.148375   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.148748   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.148768   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.148908   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.149093   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.149252   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.149392   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.149567   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.149725   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.149735   78618 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 20:18:18.259256   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 20:18:18.259356   78618 main.go:141] libmachine: found compatible host: buildroot
	I0913 20:18:18.259371   78618 main.go:141] libmachine: Provisioning with buildroot...
	I0913 20:18:18.259380   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.259630   78618 buildroot.go:166] provisioning hostname "newest-cni-350416"
	I0913 20:18:18.259658   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.259841   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.262454   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.262896   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.262917   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.263098   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.263274   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.263417   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.263547   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.263732   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.263934   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.263947   78618 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-350416 && echo "newest-cni-350416" | sudo tee /etc/hostname
	I0913 20:18:18.391203   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-350416
	
	I0913 20:18:18.391231   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.394245   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.394654   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.394685   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.394864   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.395046   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.395231   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.395362   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.395511   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.395725   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.395756   78618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-350416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-350416/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-350416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 20:18:18.512849   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 20:18:18.512878   78618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 20:18:18.512897   78618 buildroot.go:174] setting up certificates
	I0913 20:18:18.512905   78618 provision.go:84] configureAuth start
	I0913 20:18:18.512914   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetMachineName
	I0913 20:18:18.513194   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:18.516150   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.516474   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.516491   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.516733   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.519202   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.519508   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.519548   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.519649   78618 provision.go:143] copyHostCerts
	I0913 20:18:18.519704   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 20:18:18.519717   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 20:18:18.519801   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 20:18:18.519905   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 20:18:18.519916   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 20:18:18.519961   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 20:18:18.520070   78618 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 20:18:18.520082   78618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 20:18:18.520121   78618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 20:18:18.520200   78618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.newest-cni-350416 san=[127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416]
	I0913 20:18:18.590824   78618 provision.go:177] copyRemoteCerts
	I0913 20:18:18.590894   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 20:18:18.590925   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.594149   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.594575   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.594604   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.594845   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.595032   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.595209   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.595363   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:18.684929   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 20:18:18.710605   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 20:18:18.736528   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 20:18:18.761086   78618 provision.go:87] duration metric: took 248.16824ms to configureAuth
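The configureAuth step above generates a server certificate with the SAN set [127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416] and copies it to /etc/docker on the guest. The Go sketch below only shows how such a SAN set ends up in a certificate; it is self-signed for brevity, whereas minikube signs server.pem with the CA under .minikube/certs, so treat it as an assumption-laden illustration rather than provision.go's actual code path.

// gencert.go: self-signed stand-in for the server cert generated above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-350416"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // three-year validity (26280h), minikube's default CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs, matching san=[127.0.0.1 192.168.72.56 localhost minikube newest-cni-350416]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.56")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-350416"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}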
	I0913 20:18:18.761127   78618 buildroot.go:189] setting minikube options for container-runtime
	I0913 20:18:18.761333   78618 config.go:182] Loaded profile config "newest-cni-350416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:18:18.761462   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:18.764233   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.764591   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:18.764632   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:18.764783   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:18.764956   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.765056   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:18.765205   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:18.765347   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:18.765502   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:18.765533   78618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 20:18:19.001904   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 20:18:19.001937   78618 main.go:141] libmachine: Checking connection to Docker...
	I0913 20:18:19.001949   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetURL
	I0913 20:18:19.003220   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Using libvirt version 6000000
	I0913 20:18:19.005080   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.005546   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.005574   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.005778   78618 main.go:141] libmachine: Docker is up and running!
	I0913 20:18:19.005793   78618 main.go:141] libmachine: Reticulating splines...
	I0913 20:18:19.005801   78618 client.go:171] duration metric: took 25.840331943s to LocalClient.Create
	I0913 20:18:19.005841   78618 start.go:167] duration metric: took 25.840394382s to libmachine.API.Create "newest-cni-350416"
	I0913 20:18:19.005854   78618 start.go:293] postStartSetup for "newest-cni-350416" (driver="kvm2")
	I0913 20:18:19.005866   78618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 20:18:19.005883   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.006157   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 20:18:19.006188   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.008175   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.008553   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.008578   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.008668   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.008932   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.009122   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.009411   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.092751   78618 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 20:18:19.097025   78618 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 20:18:19.097052   78618 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 20:18:19.097117   78618 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 20:18:19.097203   78618 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 20:18:19.097285   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 20:18:19.106334   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 20:18:19.131631   78618 start.go:296] duration metric: took 125.762424ms for postStartSetup
	I0913 20:18:19.131689   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetConfigRaw
	I0913 20:18:19.132358   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:19.135146   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.135579   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.135605   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.135853   78618 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/config.json ...
	I0913 20:18:19.136034   78618 start.go:128] duration metric: took 25.989944651s to createHost
	I0913 20:18:19.136059   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.138242   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.138636   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.138661   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.138781   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.138945   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.139114   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.139239   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.139400   78618 main.go:141] libmachine: Using SSH client type: native
	I0913 20:18:19.139610   78618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0913 20:18:19.139624   78618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 20:18:19.255172   78618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726258699.232425808
	
	I0913 20:18:19.255198   78618 fix.go:216] guest clock: 1726258699.232425808
	I0913 20:18:19.255208   78618 fix.go:229] Guest: 2024-09-13 20:18:19.232425808 +0000 UTC Remote: 2024-09-13 20:18:19.136046627 +0000 UTC m=+26.102279958 (delta=96.379181ms)
	I0913 20:18:19.255235   78618 fix.go:200] guest clock delta is within tolerance: 96.379181ms
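The guest-clock check above runs `date +%s.%N` on the new machine, compares the result with the host clock, and accepts the machine when the delta stays inside a tolerance. A minimal sketch of the same comparison follows; the command is run locally just to show the parsing, and the 2s tolerance is an assumption, not minikube's actual threshold in fix.go.

// clockcheck.go: sketch of the guest-clock delta check above.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const tolerance = 2 * time.Second // hypothetical tolerance

	// minikube runs this over SSH on the guest; locally it illustrates the same parse.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}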
	I0913 20:18:19.255244   78618 start.go:83] releasing machines lock for "newest-cni-350416", held for 26.109236556s
	I0913 20:18:19.255272   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.255549   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:19.258112   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.258603   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.258642   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.258795   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259238   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259508   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:19.259612   78618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 20:18:19.259651   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.259710   78618 ssh_runner.go:195] Run: cat /version.json
	I0913 20:18:19.259735   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:19.262387   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262616   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262760   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.262789   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.262928   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.263022   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:19.263052   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:19.263139   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.263213   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:19.263291   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.263400   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:19.263444   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.263546   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:19.263690   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:19.367326   78618 ssh_runner.go:195] Run: systemctl --version
	I0913 20:18:19.373133   78618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 20:18:19.533204   78618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 20:18:19.540078   78618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 20:18:19.540145   78618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 20:18:19.557385   78618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 20:18:19.557411   78618 start.go:495] detecting cgroup driver to use...
	I0913 20:18:19.557481   78618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 20:18:19.575471   78618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 20:18:19.589534   78618 docker.go:217] disabling cri-docker service (if available) ...
	I0913 20:18:19.589601   78618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 20:18:19.602905   78618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 20:18:19.616392   78618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 20:18:19.735766   78618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 20:18:19.880340   78618 docker.go:233] disabling docker service ...
	I0913 20:18:19.880416   78618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 20:18:19.895106   78618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 20:18:19.908658   78618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 20:18:20.058672   78618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 20:18:20.180401   78618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 20:18:20.194786   78618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 20:18:20.213713   78618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 20:18:20.213770   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.223957   78618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 20:18:20.224012   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.234176   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.244507   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.254714   78618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 20:18:20.265967   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.276060   78618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 20:18:20.293622   78618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
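Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. The section headers are assumed from CRI-O's stock configuration, and the file shipped on the minikube ISO may carry additional keys.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]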
	I0913 20:18:20.303609   78618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 20:18:20.313513   78618 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 20:18:20.313562   78618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 20:18:20.327748   78618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 20:18:20.339098   78618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:18:20.455219   78618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 20:18:20.560379   78618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 20:18:20.560446   78618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 20:18:20.565345   78618 start.go:563] Will wait 60s for crictl version
	I0913 20:18:20.565408   78618 ssh_runner.go:195] Run: which crictl
	I0913 20:18:20.569510   78618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 20:18:20.609857   78618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 20:18:20.609922   78618 ssh_runner.go:195] Run: crio --version
	I0913 20:18:20.638401   78618 ssh_runner.go:195] Run: crio --version
	I0913 20:18:20.671339   78618 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 20:18:20.672486   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetIP
	I0913 20:18:20.675093   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:20.675401   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:20.675431   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:20.675616   78618 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 20:18:20.680277   78618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 20:18:20.694660   78618 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0913 20:18:20.695840   78618 kubeadm.go:883] updating cluster {Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 20:18:20.695967   78618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 20:18:20.696036   78618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 20:18:20.731785   78618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 20:18:20.731866   78618 ssh_runner.go:195] Run: which lz4
	I0913 20:18:20.736208   78618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 20:18:20.740562   78618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 20:18:20.740598   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 20:18:22.144859   78618 crio.go:462] duration metric: took 1.408702239s to copy over tarball
	I0913 20:18:22.144936   78618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 20:18:24.177149   78618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032188373s)
	I0913 20:18:24.177174   78618 crio.go:469] duration metric: took 2.032289486s to extract the tarball
	I0913 20:18:24.177182   78618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 20:18:24.215670   78618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 20:18:24.258983   78618 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 20:18:24.259003   78618 cache_images.go:84] Images are preloaded, skipping loading
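The preload verdict above comes from listing the images CRI-O already has via `sudo crictl images --output json` and checking the expected control-plane images against that list. A rough Go equivalent of the check is sketched below; the JSON field names (an images array with repoTags) are assumed from crictl's output, and this is not minikube's cache_images.go.

// imagecheck.go: sketch of the "are images preloaded?" check above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already holds the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}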
	I0913 20:18:24.259010   78618 kubeadm.go:934] updating node { 192.168.72.56 8443 v1.31.1 crio true true} ...
	I0913 20:18:24.259101   78618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-350416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 20:18:24.259162   78618 ssh_runner.go:195] Run: crio config
	I0913 20:18:24.309703   78618 cni.go:84] Creating CNI manager for ""
	I0913 20:18:24.309727   78618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:18:24.309737   78618 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0913 20:18:24.309757   78618 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-350416 NodeName:newest-cni-350416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 20:18:24.309895   78618 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-350416"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 20:18:24.309954   78618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 20:18:24.320322   78618 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 20:18:24.320415   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 20:18:24.330176   78618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0913 20:18:24.352594   78618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 20:18:24.372861   78618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0913 20:18:24.389601   78618 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0913 20:18:24.393348   78618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 20:18:24.405088   78618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:18:24.539341   78618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:18:24.561683   78618 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416 for IP: 192.168.72.56
	I0913 20:18:24.561704   78618 certs.go:194] generating shared ca certs ...
	I0913 20:18:24.561723   78618 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.561902   78618 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 20:18:24.561964   78618 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 20:18:24.561980   78618 certs.go:256] generating profile certs ...
	I0913 20:18:24.562046   78618 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key
	I0913 20:18:24.562078   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt with IP's: []
	I0913 20:18:24.681770   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt ...
	I0913 20:18:24.681801   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.crt: {Name:mk0a18100c95f2446b4dae27c8d4ce3bd1331da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.681996   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key ...
	I0913 20:18:24.682013   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/client.key: {Name:mk81a2f20e5c3515cf4258741dd2a03651473768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.682139   78618 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee
	I0913 20:18:24.682164   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.56]
	I0913 20:18:24.875783   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee ...
	I0913 20:18:24.875815   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee: {Name:mk9447078ee811271cf60ea7f788f6363d1810f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.876046   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee ...
	I0913 20:18:24.876063   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee: {Name:mkadad9a8167aaefd31ad5e191beee2d93039c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.876210   78618 certs.go:381] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt.33b8c2ee -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt
	I0913 20:18:24.876312   78618 certs.go:385] copying /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key.33b8c2ee -> /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key
	I0913 20:18:24.876398   78618 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key
	I0913 20:18:24.876425   78618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt with IP's: []
	I0913 20:18:24.973620   78618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt ...
	I0913 20:18:24.973650   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt: {Name:mkf9ad5559c5cf3dd38ce74b5e325fd4b60bbcb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.988202   78618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key ...
	I0913 20:18:24.988268   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key: {Name:mkafc4915b8b2b8b957b9031db517e8d2d2a7699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:24.988549   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 20:18:24.988619   78618 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 20:18:24.988638   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 20:18:24.988667   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 20:18:24.988697   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 20:18:24.988728   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 20:18:24.988783   78618 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 20:18:24.989497   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 20:18:25.017766   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 20:18:25.044635   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 20:18:25.073179   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 20:18:25.098366   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 20:18:25.124948   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 20:18:25.151129   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 20:18:25.176023   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/newest-cni-350416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 20:18:25.202723   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 20:18:25.228509   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 20:18:25.254758   78618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 20:18:25.280044   78618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 20:18:25.297000   78618 ssh_runner.go:195] Run: openssl version
	I0913 20:18:25.302667   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 20:18:25.313134   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.317979   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.318035   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 20:18:25.325042   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 20:18:25.342834   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 20:18:25.373652   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.381253   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.381333   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 20:18:25.391426   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 20:18:25.411098   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 20:18:25.423420   78618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.428094   78618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.428150   78618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 20:18:25.436588   78618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
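The three openssl/ln sequences above install each CA into the guest's system trust store under its OpenSSL subject hash. For the minikubeCA certificate, the equivalent two steps are (paths taken from the log; the hash value is whatever openssl prints for that cert, b5213941 in this run):

    # Print the subject hash OpenSSL uses to look the cert up in /etc/ssl/certs
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # Create the <hash>.0 symlink only if it is not already a link
    sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"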
	I0913 20:18:25.450503   78618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 20:18:25.454998   78618 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 20:18:25.455060   78618 kubeadm.go:392] StartCluster: {Name:newest-cni-350416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-350416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 20:18:25.455154   78618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 20:18:25.455213   78618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 20:18:25.504001   78618 cri.go:89] found id: ""
	I0913 20:18:25.504074   78618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 20:18:25.515134   78618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:18:25.526660   78618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:18:25.538810   78618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:18:25.538836   78618 kubeadm.go:157] found existing configuration files:
	
	I0913 20:18:25.538887   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:18:25.552145   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:18:25.552198   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:18:25.566590   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:18:25.579683   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:18:25.579749   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:18:25.592991   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:18:25.603885   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:18:25.603927   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:18:25.614932   78618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:18:25.626230   78618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:18:25.626293   78618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:18:25.642955   78618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:18:25.771410   78618 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:18:25.771504   78618 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:18:25.883049   78618 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:18:25.883224   78618 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:18:25.883350   78618 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:18:25.897018   78618 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:18:26.070585   78618 out.go:235]   - Generating certificates and keys ...
	I0913 20:18:26.070726   78618 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:18:26.070849   78618 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:18:26.070977   78618 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 20:18:26.093575   78618 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 20:18:26.492569   78618 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 20:18:26.640062   78618 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 20:18:26.710698   78618 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 20:18:26.710875   78618 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-350416] and IPs [192.168.72.56 127.0.0.1 ::1]
	I0913 20:18:27.148898   78618 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 20:18:27.149116   78618 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-350416] and IPs [192.168.72.56 127.0.0.1 ::1]
	I0913 20:18:27.290447   78618 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 20:18:27.432680   78618 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 20:18:27.699323   78618 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 20:18:27.699539   78618 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:18:27.855592   78618 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:18:27.988316   78618 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:18:28.351998   78618 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:18:28.507450   78618 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:18:28.621183   78618 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:18:28.621746   78618 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:18:28.627751   78618 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:18:28.629447   78618 out.go:235]   - Booting up control plane ...
	I0913 20:18:28.629605   78618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:18:28.629722   78618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:18:28.630221   78618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:18:28.646872   78618 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:18:28.655449   78618 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:18:28.655502   78618 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:18:28.829200   78618 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:18:28.829381   78618 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:18:29.330605   78618 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.580308ms
	I0913 20:18:29.330725   78618 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:18:34.329503   78618 kubeadm.go:310] [api-check] The API server is healthy after 5.001842152s
	I0913 20:18:34.346705   78618 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:18:34.373033   78618 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:18:34.412844   78618 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:18:34.413022   78618 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-350416 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:18:34.429238   78618 kubeadm.go:310] [bootstrap-token] Using token: 284bo8.ywoehqn8qyl74v88
	I0913 20:18:34.431063   78618 out.go:235]   - Configuring RBAC rules ...
	I0913 20:18:34.431210   78618 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:18:34.442054   78618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:18:34.454773   78618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:18:34.459628   78618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:18:34.464010   78618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:18:34.468179   78618 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:18:34.738629   78618 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:18:35.159520   78618 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:18:35.738327   78618 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:18:35.739948   78618 kubeadm.go:310] 
	I0913 20:18:35.740048   78618 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:18:35.740062   78618 kubeadm.go:310] 
	I0913 20:18:35.740195   78618 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:18:35.740220   78618 kubeadm.go:310] 
	I0913 20:18:35.740260   78618 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:18:35.740339   78618 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:18:35.740420   78618 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:18:35.740427   78618 kubeadm.go:310] 
	I0913 20:18:35.740530   78618 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:18:35.740550   78618 kubeadm.go:310] 
	I0913 20:18:35.740616   78618 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:18:35.740625   78618 kubeadm.go:310] 
	I0913 20:18:35.740689   78618 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:18:35.740787   78618 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:18:35.740882   78618 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:18:35.740892   78618 kubeadm.go:310] 
	I0913 20:18:35.741001   78618 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:18:35.741129   78618 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:18:35.741147   78618 kubeadm.go:310] 
	I0913 20:18:35.741273   78618 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 284bo8.ywoehqn8qyl74v88 \
	I0913 20:18:35.741417   78618 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:18:35.741452   78618 kubeadm.go:310] 	--control-plane 
	I0913 20:18:35.741463   78618 kubeadm.go:310] 
	I0913 20:18:35.741567   78618 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:18:35.741581   78618 kubeadm.go:310] 
	I0913 20:18:35.741654   78618 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 284bo8.ywoehqn8qyl74v88 \
	I0913 20:18:35.741759   78618 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:18:35.743258   78618 kubeadm.go:310] W0913 20:18:25.753040     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:18:35.743527   78618 kubeadm.go:310] W0913 20:18:25.753925     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:18:35.743650   78618 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:18:35.743692   78618 cni.go:84] Creating CNI manager for ""
	I0913 20:18:35.743702   78618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:18:35.746310   78618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:18:35.747485   78618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:18:35.757809   78618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 20:18:35.780777   78618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:18:35.780851   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:35.780884   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-350416 minikube.k8s.io/updated_at=2024_09_13T20_18_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=newest-cni-350416 minikube.k8s.io/primary=true
	I0913 20:18:35.807675   78618 ops.go:34] apiserver oom_adj: -16
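The cluster-admin binding and node labels applied a few lines above can be verified from the host once the kubeconfig is written; a quick sketch using standard kubectl and the names reported in this run:

    # Confirm the RBAC binding and the labels added to the control-plane node
    kubectl --context newest-cni-350416 get clusterrolebinding minikube-rbac
    kubectl --context newest-cni-350416 get node newest-cni-350416 --show-labels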
	I0913 20:18:36.024487   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:36.525498   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:37.024950   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:37.525345   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:38.025349   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:38.525459   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:39.025202   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:39.525578   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:40.025215   78618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:18:40.128497   78618 kubeadm.go:1113] duration metric: took 4.347709754s to wait for elevateKubeSystemPrivileges
	I0913 20:18:40.128536   78618 kubeadm.go:394] duration metric: took 14.673487582s to StartCluster
	I0913 20:18:40.128559   78618 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:40.128644   78618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:18:40.130304   78618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:18:40.130546   78618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 20:18:40.130542   78618 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:18:40.130566   78618 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:18:40.130712   78618 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-350416"
	I0913 20:18:40.130729   78618 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-350416"
	I0913 20:18:40.130749   78618 config.go:182] Loaded profile config "newest-cni-350416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:18:40.130763   78618 host.go:66] Checking if "newest-cni-350416" exists ...
	I0913 20:18:40.130777   78618 addons.go:69] Setting default-storageclass=true in profile "newest-cni-350416"
	I0913 20:18:40.130800   78618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-350416"
	I0913 20:18:40.131151   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:40.131207   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:40.131279   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:40.131319   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:40.132439   78618 out.go:177] * Verifying Kubernetes components...
	I0913 20:18:40.133913   78618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:18:40.146503   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0913 20:18:40.146512   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0913 20:18:40.146920   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:40.147026   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:40.147512   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:18:40.147526   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:40.147657   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:18:40.147672   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:40.147862   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:40.147952   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:40.148308   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:40.148339   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:40.148497   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:40.152884   78618 addons.go:234] Setting addon default-storageclass=true in "newest-cni-350416"
	I0913 20:18:40.152945   78618 host.go:66] Checking if "newest-cni-350416" exists ...
	I0913 20:18:40.153335   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:40.153388   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:40.164416   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46845
	I0913 20:18:40.164981   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:40.165524   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:18:40.165551   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:40.165888   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:40.166165   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:40.168216   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:40.168983   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0913 20:18:40.169544   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:40.170006   78618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:18:40.170062   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:18:40.170104   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:40.170644   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:40.171263   78618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:18:40.171334   78618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:18:40.172270   78618 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:18:40.172293   78618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:18:40.172313   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:40.175787   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:40.176417   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:40.176460   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:40.176611   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:40.176793   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:40.176930   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:40.177043   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:40.187677   78618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0913 20:18:40.188274   78618 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:18:40.188796   78618 main.go:141] libmachine: Using API Version  1
	I0913 20:18:40.188813   78618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:18:40.189146   78618 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:18:40.189389   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetState
	I0913 20:18:40.190984   78618 main.go:141] libmachine: (newest-cni-350416) Calling .DriverName
	I0913 20:18:40.191181   78618 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:18:40.191199   78618 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:18:40.191218   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHHostname
	I0913 20:18:40.194284   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:40.194756   78618 main.go:141] libmachine: (newest-cni-350416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:5a:f4", ip: ""} in network mk-newest-cni-350416: {Iface:virbr4 ExpiryTime:2024-09-13 21:18:08 +0000 UTC Type:0 Mac:52:54:00:ca:5a:f4 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:newest-cni-350416 Clientid:01:52:54:00:ca:5a:f4}
	I0913 20:18:40.194784   78618 main.go:141] libmachine: (newest-cni-350416) DBG | domain newest-cni-350416 has defined IP address 192.168.72.56 and MAC address 52:54:00:ca:5a:f4 in network mk-newest-cni-350416
	I0913 20:18:40.194935   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHPort
	I0913 20:18:40.195085   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHKeyPath
	I0913 20:18:40.195184   78618 main.go:141] libmachine: (newest-cni-350416) Calling .GetSSHUsername
	I0913 20:18:40.195287   78618 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/newest-cni-350416/id_rsa Username:docker}
	I0913 20:18:40.425852   78618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:18:40.425919   78618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 20:18:40.543875   78618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:18:40.566667   78618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:18:41.064659   78618 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
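The long sed pipeline a few lines up is what produces this "host record injected" message: it inserts a hosts block just above the "forward . /etc/resolv.conf" directive and a log directive just above "errors" in the CoreDNS Corefile. One way to confirm the result, with the expected insertions noted as comments (standard kubectl; configmap and context names as logged):

    # Dump the patched Corefile; among its directives you should now see:
    #     log                                   (inserted above "errors")
    #     hosts {
    #        192.168.72.1 host.minikube.internal
    #        fallthrough
    #     }                                     (inserted above "forward . /etc/resolv.conf")
    kubectl --context newest-cni-350416 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'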
	I0913 20:18:41.064750   78618 main.go:141] libmachine: Making call to close driver server
	I0913 20:18:41.064773   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Close
	I0913 20:18:41.065084   78618 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:18:41.065097   78618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:18:41.065106   78618 main.go:141] libmachine: Making call to close driver server
	I0913 20:18:41.065113   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Close
	I0913 20:18:41.065324   78618 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:18:41.065344   78618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:18:41.066355   78618 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:18:41.066434   78618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:18:41.084502   78618 main.go:141] libmachine: Making call to close driver server
	I0913 20:18:41.084525   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Close
	I0913 20:18:41.084836   78618 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:18:41.084873   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Closing plugin on server side
	I0913 20:18:41.084905   78618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:18:41.573085   78618 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-350416" context rescaled to 1 replicas
	I0913 20:18:41.802359   78618 api_server.go:72] duration metric: took 1.671714332s to wait for apiserver process to appear ...
	I0913 20:18:41.802385   78618 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:18:41.802409   78618 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0913 20:18:41.802444   78618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.235741645s)
	I0913 20:18:41.802487   78618 main.go:141] libmachine: Making call to close driver server
	I0913 20:18:41.802501   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Close
	I0913 20:18:41.802815   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Closing plugin on server side
	I0913 20:18:41.802826   78618 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:18:41.802838   78618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:18:41.802848   78618 main.go:141] libmachine: Making call to close driver server
	I0913 20:18:41.802857   78618 main.go:141] libmachine: (newest-cni-350416) Calling .Close
	I0913 20:18:41.803105   78618 main.go:141] libmachine: (newest-cni-350416) DBG | Closing plugin on server side
	I0913 20:18:41.803132   78618 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:18:41.803149   78618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:18:41.804825   78618 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0913 20:18:41.805976   78618 addons.go:510] duration metric: took 1.675402997s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0913 20:18:41.808026   78618 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0913 20:18:41.809580   78618 api_server.go:141] control plane version: v1.31.1
	I0913 20:18:41.809599   78618 api_server.go:131] duration metric: took 7.208007ms to wait for apiserver health ...
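The healthz probe recorded here is a plain HTTPS GET; assuming the default anonymous access Kubernetes grants to /healthz, the same check can be reproduced from the host (endpoint taken from the log; -k skips verification of the cluster's self-signed CA):

    # Expect the bare string "ok" with HTTP 200, matching the response above
    curl -k https://192.168.72.56:8443/healthz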
	I0913 20:18:41.809606   78618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:18:41.826914   78618 system_pods.go:59] 8 kube-system pods found
	I0913 20:18:41.826948   78618 system_pods.go:61] "coredns-7c65d6cfc9-b5smz" [ec6b9566-051d-4bcd-ba42-64ecf294a2a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 20:18:41.826956   78618 system_pods.go:61] "coredns-7c65d6cfc9-k9gw7" [3d072f05-3f04-4ff5-80c0-4d1b01bb6b3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 20:18:41.826962   78618 system_pods.go:61] "etcd-newest-cni-350416" [944a341b-7877-4595-834e-5aef1382594c] Running
	I0913 20:18:41.826967   78618 system_pods.go:61] "kube-apiserver-newest-cni-350416" [db8f05d7-7d3b-4260-a392-b22d482425bf] Running
	I0913 20:18:41.826974   78618 system_pods.go:61] "kube-controller-manager-newest-cni-350416" [56edb59d-ca1a-4ec7-b43c-d6b783f5fb53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 20:18:41.826978   78618 system_pods.go:61] "kube-proxy-865x9" [f438767f-0508-42ce-bd32-1b1801ab437f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 20:18:41.826983   78618 system_pods.go:61] "kube-scheduler-newest-cni-350416" [e6b38e44-0b02-4f01-a53f-31c8aa7c49d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 20:18:41.826987   78618 system_pods.go:61] "storage-provisioner" [0c2fdea1-31ab-4244-9b0b-54f438c234b2] Pending
	I0913 20:18:41.826993   78618 system_pods.go:74] duration metric: took 17.383041ms to wait for pod list to return data ...
	I0913 20:18:41.827000   78618 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:18:41.832769   78618 default_sa.go:45] found service account: "default"
	I0913 20:18:41.832799   78618 default_sa.go:55] duration metric: took 5.79434ms for default service account to be created ...
	I0913 20:18:41.832810   78618 kubeadm.go:582] duration metric: took 1.702171178s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 20:18:41.832829   78618 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:18:41.841371   78618 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:18:41.841404   78618 node_conditions.go:123] node cpu capacity is 2
	I0913 20:18:41.841426   78618 node_conditions.go:105] duration metric: took 8.592099ms to run NodePressure ...
	I0913 20:18:41.841441   78618 start.go:241] waiting for startup goroutines ...
	I0913 20:18:41.841451   78618 start.go:246] waiting for cluster config update ...
	I0913 20:18:41.841464   78618 start.go:255] writing updated cluster config ...
	I0913 20:18:41.841807   78618 ssh_runner.go:195] Run: rm -f paused
	I0913 20:18:41.900854   78618 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:18:41.902625   78618 out.go:177] * Done! kubectl is now configured to use "newest-cni-350416" cluster and "default" namespace by default
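With the "Done!" line the start log for this profile ends and the host kubeconfig points at the new cluster, so a minimal smoke test is just (standard kubectl; context name as reported above):

    # List the single control-plane node and the kube-system pods it is running
    kubectl --context newest-cni-350416 get nodes
    kubectl --context newest-cni-350416 -n kube-system get pods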
	
	
	==> CRI-O <==
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.934808055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258731934775728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d07e32d9-2363-47c3-8a0d-27aaefa86cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.935284231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28195f35-8087-4ee5-a3f3-9f9c776a9193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.935338129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28195f35-8087-4ee5-a3f3-9f9c776a9193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.935603424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28195f35-8087-4ee5-a3f3-9f9c776a9193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.979305512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ee8ad37-afeb-4cb5-a619-0744ba4e0b69 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.979422086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ee8ad37-afeb-4cb5-a619-0744ba4e0b69 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.981996897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb29a169-43a3-4ed3-aa34-8887e80e53cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.982373200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258731982352304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb29a169-43a3-4ed3-aa34-8887e80e53cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.983307534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01de52e3-8026-42cd-b6e4-258adeb1785e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.983362164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01de52e3-8026-42cd-b6e4-258adeb1785e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:51 embed-certs-175374 crio[698]: time="2024-09-13 20:18:51.983663824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01de52e3-8026-42cd-b6e4-258adeb1785e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.025712741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c24fcd0-407f-4f63-a82e-c5fb28bc242f name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.025784134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c24fcd0-407f-4f63-a82e-c5fb28bc242f name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.026844841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=056d26d2-d37c-4fd2-a898-fa72a38422d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.027204044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258732027184679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=056d26d2-d37c-4fd2-a898-fa72a38422d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.028074029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2dd508d-7f51-4f8b-a68c-614f3f39f988 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.028124618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2dd508d-7f51-4f8b-a68c-614f3f39f988 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.028321204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2dd508d-7f51-4f8b-a68c-614f3f39f988 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.070315246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5af4981b-b25e-437c-bc3a-88d6785c5a76 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.070398519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5af4981b-b25e-437c-bc3a-88d6785c5a76 name=/runtime.v1.RuntimeService/Version
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.071773892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9d72149-693d-4012-90c3-00ff6e193ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.072156992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258732072135249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9d72149-693d-4012-90c3-00ff6e193ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.072725661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3f88e66-a0fd-4ba6-adef-7c4a09620909 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.072783064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3f88e66-a0fd-4ba6-adef-7c4a09620909 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:18:52 embed-certs-175374 crio[698]: time="2024-09-13 20:18:52.072968219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726257554121691214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc1d1764b64031585d14059ac9cb84e25c4d56f6cce3f844422b55edf3e5740,PodSandboxId:5a65221b12cd8104483023513fe85d970fc91bfa5e41a3b2e3b03c884a7d0927,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726257534279950015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78550100-7601-4019-a699-49a888b727ec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7,PodSandboxId:fe47c92942c956149df15d774f8c96fd707a9c4d65c0de5d2aeb9fbb7d9e8887,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726257530848869494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lrrkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86,PodSandboxId:140b66d4a3d1b0ac227dce90bb7a1a9e190801662abfdd179aef4272e409e8ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726257523259928114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jv77q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28580bbe-7c5f-4161-8
370-41f3286d508c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3,PodSandboxId:cebcdd6272ca6944b6d37519f1e1fbac7d6fad2fa1b184d187925cb4ef791c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726257523225956258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa99920-b51a-4d30-a8e0-269a0beee
e8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0,PodSandboxId:9316c569b83c7cf2caf7ead97ea76c085819b6df6e94f6aa6420682f33f0e80a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726257519534934341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53f4ce16a79335202e881123ba1a8a01,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f,PodSandboxId:33eefe67c1e3ab16447af68ae4af63fa0d3d9d282dccdb6f1e74ed4271f2e4f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726257519507998419,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c32c281fdfe02bb00a97e4b471c7e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73,PodSandboxId:45b75d912b1093e8d1b29e8318ce06e122867a44a2b29b108d2b56b1ad8db987,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726257519513900636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda8edd45b328faa386561c0e2decf96,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d,PodSandboxId:ccac873610d8af48a339296f7ebf6ac764ab8fbe209055034eed3d552a630193,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726257519504805650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-175374,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70f941756c7e7a05397efce3a7696f8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3f88e66-a0fd-4ba6-adef-7c4a09620909 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db0694e689431       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   cebcdd6272ca6       storage-provisioner
	6fc1d1764b640       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   5a65221b12cd8       busybox
	5a58f184d5704       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   fe47c92942c95       coredns-7c65d6cfc9-lrrkx
	57402126568c7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   140b66d4a3d1b       kube-proxy-jv77q
	d21ac9f9341fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   cebcdd6272ca6       storage-provisioner
	b7288e6c437a2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   9316c569b83c7       etcd-embed-certs-175374
	3e8d6c49b3b39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   45b75d912b109       kube-controller-manager-embed-certs-175374
	c32212fb06588       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   33eefe67c1e3a       kube-scheduler-embed-certs-175374
	8c6b66cfda64c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   ccac873610d8a       kube-apiserver-embed-certs-175374
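	
	The table above is crictl's view of containers on the node. A roughly equivalent snapshot can be taken by hand; this is illustrative only, and the minikube profile name is assumed to match the node name shown:
	  minikube -p embed-certs-175374 ssh -- sudo crictl ps -a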
	
	
	==> coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56828 - 44655 "HINFO IN 3690377131981054951.5232825123940261538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014623001s
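	
	The same CoreDNS output can be pulled on demand with kubectl, using the pod name from the container table above (the kube context name is assumed to match the profile):
	  kubectl --context embed-certs-175374 -n kube-system logs coredns-7c65d6cfc9-lrrkx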
	
	
	==> describe nodes <==
	Name:               embed-certs-175374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-175374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=embed-certs-175374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T19_49_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:49:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-175374
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 20:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 20:14:33 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 20:14:33 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 20:14:33 +0000   Fri, 13 Sep 2024 19:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 20:14:33 +0000   Fri, 13 Sep 2024 19:58:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    embed-certs-175374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed530f9a25374e51a3a8dd17430b96db
	  System UUID:                ed530f9a-2537-4e51-a3a8-dd17430b96db
	  Boot ID:                    15b3714b-88c3-4064-ac92-0b01d63e42fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-lrrkx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-175374                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-175374             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-175374    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-jv77q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-175374             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-fnznh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-175374 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-175374 event: Registered Node embed-certs-175374 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-175374 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-175374 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-175374 event: Registered Node embed-certs-175374 in Controller
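	
	This node description can be regenerated at any point while the cluster is up (illustrative; the kube context name is assumed to match the profile):
	  kubectl --context embed-certs-175374 describe node embed-certs-175374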
	
	
	==> dmesg <==
	[Sep13 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058470] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042323] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.159115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.667593] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +2.413991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000038] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.807696] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.059678] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062069] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.191266] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.133781] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.296693] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +4.084501] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +2.050540] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.073678] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.513518] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.455135] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +3.300690] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.140693] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] <==
	{"level":"info","ts":"2024-09-13T19:58:41.270419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-13T19:58:41.272116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:58:41.273113Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:58:41.273890Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:58:41.272070Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:embed-certs-175374 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:58:41.282618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:58:41.286546Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:58:41.286579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T19:58:41.287174Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T19:58:41.287967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-13T19:58:56.850452Z","caller":"traceutil/trace.go:171","msg":"trace[1093908443] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"127.521212ms","start":"2024-09-13T19:58:56.721883Z","end":"2024-09-13T19:58:56.849404Z","steps":["trace[1093908443] 'process raft request'  (duration: 127.074105ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T20:08:41.304552Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-09-13T20:08:41.315277Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.335713ms","hash":809977732,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2854912,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-13T20:08:41.315342Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":809977732,"revision":849,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T20:13:41.314766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2024-09-13T20:13:41.318742Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1092,"took":"3.641437ms","hash":3922799518,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-13T20:13:41.318797Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3922799518,"revision":1092,"compact-revision":849}
	{"level":"info","ts":"2024-09-13T20:18:26.317340Z","caller":"traceutil/trace.go:171","msg":"trace[1770507986] linearizableReadLoop","detail":"{readStateIndex:1838; appliedIndex:1837; }","duration":"302.471131ms","start":"2024-09-13T20:18:26.014840Z","end":"2024-09-13T20:18:26.317311Z","steps":["trace[1770507986] 'read index received'  (duration: 302.305054ms)","trace[1770507986] 'applied index is now lower than readState.Index'  (duration: 165.52µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T20:18:26.317614Z","caller":"traceutil/trace.go:171","msg":"trace[1354806680] transaction","detail":"{read_only:false; response_revision:1565; number_of_response:1; }","duration":"319.890449ms","start":"2024-09-13T20:18:25.997708Z","end":"2024-09-13T20:18:26.317598Z","steps":["trace[1354806680] 'process raft request'  (duration: 319.470349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T20:18:26.318960Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T20:18:25.997673Z","time spent":"320.449884ms","remote":"127.0.0.1:40600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1564 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-13T20:18:26.317745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.835462ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T20:18:26.319169Z","caller":"traceutil/trace.go:171","msg":"trace[610294962] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1565; }","duration":"304.324477ms","start":"2024-09-13T20:18:26.014834Z","end":"2024-09-13T20:18:26.319158Z","steps":["trace[610294962] 'agreement among raft nodes before linearized reading'  (duration: 302.820519ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T20:18:41.323153Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1334}
	{"level":"info","ts":"2024-09-13T20:18:41.328071Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1334,"took":"4.115794ms","hash":1992656185,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-13T20:18:41.328212Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1992656185,"revision":1334,"compact-revision":1092}
	
	
	==> kernel <==
	 20:18:52 up 20 min,  0 users,  load average: 0.15, 0.12, 0.11
	Linux embed-certs-175374 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] <==
	 > logger="UnhandledError"
	I0913 20:14:43.658968       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:16:43.658612       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:16:43.658769       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0913 20:16:43.659620       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:16:43.659749       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:16:43.660809       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:16:43.660809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0913 20:18:42.661471       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:18:42.661838       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0913 20:18:43.663323       1 handler_proxy.go:99] no RequestInfo found in the context
	W0913 20:18:43.663377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0913 20:18:43.663575       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0913 20:18:43.663669       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0913 20:18:43.664723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0913 20:18:43.664789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
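	
	The repeated 503s above indicate the v1beta1.metrics.k8s.io APIService is not being served by metrics-server. The APIService and its backing pod can be inspected directly; the context name and the k8s-app=metrics-server label selector are assumptions based on the standard metrics-server manifests:
	  kubectl --context embed-certs-175374 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-175374 -n kube-system get pods -l k8s-app=metrics-server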
	
	
	==> kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] <==
	E0913 20:13:46.372477       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:13:46.854333       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:14:16.379098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:16.862399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:14:33.460897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-175374"
	E0913 20:14:46.385943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:14:46.869602       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0913 20:14:49.910961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="200.398µs"
	I0913 20:15:04.908930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.999µs"
	E0913 20:15:16.392016       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:16.876874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:15:46.399887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:15:46.884070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:16.406440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:16.891739       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:16:46.414413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:16:46.901637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:16.420764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:16.909350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:17:46.427101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:17:46.918318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:18:16.433149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:18:16.925322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0913 20:18:46.441135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0913 20:18:46.934476       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 19:58:43.472828       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 19:58:43.485393       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0913 19:58:43.485676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 19:58:43.525451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 19:58:43.525565       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 19:58:43.525590       1 server_linux.go:169] "Using iptables Proxier"
	I0913 19:58:43.528314       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 19:58:43.528682       1 server.go:483] "Version info" version="v1.31.1"
	I0913 19:58:43.528706       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:58:43.530430       1 config.go:199] "Starting service config controller"
	I0913 19:58:43.530478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 19:58:43.530574       1 config.go:105] "Starting endpoint slice config controller"
	I0913 19:58:43.530595       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 19:58:43.531024       1 config.go:328] "Starting node config controller"
	I0913 19:58:43.531054       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 19:58:43.630763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 19:58:43.630810       1 shared_informer.go:320] Caches are synced for service config
	I0913 19:58:43.631286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] <==
	I0913 19:58:41.120183       1 serving.go:386] Generated self-signed cert in-memory
	W0913 19:58:42.625768       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 19:58:42.625860       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 19:58:42.625870       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 19:58:42.625876       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 19:58:42.667566       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 19:58:42.667614       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:58:42.670054       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 19:58:42.670101       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 19:58:42.670759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 19:58:42.670841       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 19:58:42.770403       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 20:17:38 embed-certs-175374 kubelet[908]: E0913 20:17:38.158666     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258658157931356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:48 embed-certs-175374 kubelet[908]: E0913 20:17:48.160308     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258668159832804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:48 embed-certs-175374 kubelet[908]: E0913 20:17:48.160738     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258668159832804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:48 embed-certs-175374 kubelet[908]: E0913 20:17:48.894382     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:17:58 embed-certs-175374 kubelet[908]: E0913 20:17:58.163362     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258678162685028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:17:58 embed-certs-175374 kubelet[908]: E0913 20:17:58.164048     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258678162685028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:03 embed-certs-175374 kubelet[908]: E0913 20:18:03.894004     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:18:08 embed-certs-175374 kubelet[908]: E0913 20:18:08.166572     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258688165960345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:08 embed-certs-175374 kubelet[908]: E0913 20:18:08.166636     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258688165960345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:15 embed-certs-175374 kubelet[908]: E0913 20:18:15.896230     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:18:18 embed-certs-175374 kubelet[908]: E0913 20:18:18.168903     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258698168377692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:18 embed-certs-175374 kubelet[908]: E0913 20:18:18.168948     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258698168377692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:27 embed-certs-175374 kubelet[908]: E0913 20:18:27.897007     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:18:28 embed-certs-175374 kubelet[908]: E0913 20:18:28.172410     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258708171554058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:28 embed-certs-175374 kubelet[908]: E0913 20:18:28.172559     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258708171554058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:37 embed-certs-175374 kubelet[908]: E0913 20:18:37.916953     908 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 20:18:37 embed-certs-175374 kubelet[908]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 20:18:37 embed-certs-175374 kubelet[908]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 20:18:37 embed-certs-175374 kubelet[908]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 20:18:37 embed-certs-175374 kubelet[908]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 20:18:38 embed-certs-175374 kubelet[908]: E0913 20:18:38.174461     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258718174090378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:38 embed-certs-175374 kubelet[908]: E0913 20:18:38.174527     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258718174090378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:40 embed-certs-175374 kubelet[908]: E0913 20:18:40.895567     908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fnznh" podUID="9ca67e1c-a852-4513-abfc-ace5908d2727"
	Sep 13 20:18:48 embed-certs-175374 kubelet[908]: E0913 20:18:48.176408     908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258728176052261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 20:18:48 embed-certs-175374 kubelet[908]: E0913 20:18:48.176743     908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258728176052261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] <==
	I0913 19:58:43.371678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0913 19:59:13.376333       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] <==
	I0913 19:59:14.213607       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:59:14.227480       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:59:14.227685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:59:14.238129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:59:14.238388       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47!
	I0913 19:59:14.244206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edd1b990-cadd-4e33-a979-885e0597261d", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47 became leader
	I0913 19:59:14.338936       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-175374_c1e9d576-090d-4312-b8cb-e13584169a47!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175374 -n embed-certs-175374
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-175374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fnznh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh: exit status 1 (61.712942ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fnznh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-175374 describe pod metrics-server-6867b74b74-fnznh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:15:51.374238   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:15:57.576066   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:16:39.295284   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/custom-flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0913 20:17:19.587848   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
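The warning above is the same pod-list GET being refused repeatedly while the test polled for the dashboard pod. A minimal sketch of reproducing that exact call by hand (assuming the old-k8s-version-234290 context is present in the kubeconfig and its apiserver is reachable; illustrative only, not part of the test):

# reproduce the pod list the helper kept retrying (same namespace and label selector)
kubectl --context old-k8s-version-234290 get --raw \
  "/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"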
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (245.590868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-234290" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-234290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-234290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.399µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-234290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
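Because the deployment info above came back empty, the image assertion could not be evaluated. A sketch of the equivalent manual checks (assuming the apiserver for the old-k8s-version-234290 profile is running again; commands are illustrative, not part of the test):

# list the dashboard pods the test was waiting for
kubectl --context old-k8s-version-234290 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
# print the scraper image, which the assertion expects to contain registry.k8s.io/echoserver:1.4
kubectl --context old-k8s-version-234290 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
  -o jsonpath='{.spec.template.spec.containers[*].image}'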
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
E0913 20:17:48.868602   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (237.609528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234290 logs -n 25: (1.726640912s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-604714 sudo cat                              | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo                                  | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo find                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-604714 sudo crio                             | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-604714                                       | bridge-604714                | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221882 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:49 UTC |
	|         | disable-driver-mounts-221882                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:49 UTC | 13 Sep 24 19:50 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-175374            | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239327             | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-512125  | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC | 13 Sep 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:50 UTC |                     |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-234290        | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-175374                 | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-175374                                  | embed-certs-175374           | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239327                  | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-239327                                   | no-preload-239327            | jenkins | v1.34.0 | 13 Sep 24 19:52 UTC | 13 Sep 24 20:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-512125       | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-512125 | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 20:03 UTC |
	|         | default-k8s-diff-port-512125                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-234290             | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC | 13 Sep 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-234290                              | old-k8s-version-234290       | jenkins | v1.34.0 | 13 Sep 24 19:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 19:53:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 19:53:41.943032   71926 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:53:41.943180   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943190   71926 out.go:358] Setting ErrFile to fd 2...
	I0913 19:53:41.943197   71926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:53:41.943402   71926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:53:41.943930   71926 out.go:352] Setting JSON to false
	I0913 19:53:41.944812   71926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5765,"bootTime":1726251457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:53:41.944913   71926 start.go:139] virtualization: kvm guest
	I0913 19:53:41.946864   71926 out.go:177] * [old-k8s-version-234290] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:53:41.948276   71926 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:53:41.948281   71926 notify.go:220] Checking for updates...
	I0913 19:53:41.950620   71926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:53:41.951967   71926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:53:41.953232   71926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:53:41.954348   71926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:53:41.955441   71926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:53:41.957011   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:53:41.957371   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.957441   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.972835   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0913 19:53:41.973170   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.973680   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.973702   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.974021   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.974203   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:41.975950   71926 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 19:53:41.977070   71926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:53:41.977361   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:53:41.977393   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:53:41.992564   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0913 19:53:41.992937   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:53:41.993374   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:53:41.993394   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:53:41.993696   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:53:41.993887   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:53:42.028938   71926 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 19:53:42.030049   71926 start.go:297] selected driver: kvm2
	I0913 19:53:42.030059   71926 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.030200   71926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:53:42.030849   71926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.030932   71926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 19:53:42.045463   71926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 19:53:42.045953   71926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 19:53:42.045989   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:53:42.046049   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:53:42.046110   71926 start.go:340] cluster config:
	{Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:53:42.046256   71926 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 19:53:42.047975   71926 out.go:177] * Starting "old-k8s-version-234290" primary control-plane node in "old-k8s-version-234290" cluster
	I0913 19:53:42.049111   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:53:42.049142   71926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 19:53:42.049152   71926 cache.go:56] Caching tarball of preloaded images
	I0913 19:53:42.049244   71926 preload.go:172] Found /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 19:53:42.049256   71926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0913 19:53:42.049381   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:53:42.049610   71926 start.go:360] acquireMachinesLock for old-k8s-version-234290: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:53:44.338294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:47.410436   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:53.490365   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:53:56.562332   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:02.642421   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:05.714373   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:11.794509   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:14.866446   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:20.946376   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:24.018394   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:30.098454   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:33.170427   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:39.250379   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:42.322396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:48.402383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:51.474349   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:54:57.554326   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:00.626470   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:06.706406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:09.778406   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:15.858396   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:18.930350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:25.010369   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:28.082351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:34.162384   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:37.234340   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:43.314402   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:46.386350   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:52.466366   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:55:55.538393   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:01.618347   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:04.690441   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:10.770383   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:13.842385   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:19.922294   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:22.994351   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:29.074375   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:32.146398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:38.226398   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:41.298354   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:47.378372   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:50.450410   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:56.530367   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:56:59.602397   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:05.682382   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:08.754412   71233 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.32:22: connect: no route to host
	I0913 19:57:11.758611   71424 start.go:364] duration metric: took 4m20.559966284s to acquireMachinesLock for "no-preload-239327"
	I0913 19:57:11.758664   71424 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:11.758671   71424 fix.go:54] fixHost starting: 
	I0913 19:57:11.759024   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:11.759062   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:11.773946   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0913 19:57:11.774454   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:11.774923   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:11.774944   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:11.775249   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:11.775449   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:11.775561   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:11.777226   71424 fix.go:112] recreateIfNeeded on no-preload-239327: state=Stopped err=<nil>
	I0913 19:57:11.777255   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	W0913 19:57:11.777386   71424 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:11.778991   71424 out.go:177] * Restarting existing kvm2 VM for "no-preload-239327" ...
	I0913 19:57:11.756000   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:11.756057   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756380   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:57:11.756419   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:57:11.756625   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:57:11.758480   71233 machine.go:96] duration metric: took 4m37.434582624s to provisionDockerMachine
	I0913 19:57:11.758528   71233 fix.go:56] duration metric: took 4m37.454978505s for fixHost
	I0913 19:57:11.758535   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 4m37.454997672s
	W0913 19:57:11.758553   71233 start.go:714] error starting host: provision: host is not running
	W0913 19:57:11.758636   71233 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0913 19:57:11.758644   71233 start.go:729] Will try again in 5 seconds ...
	I0913 19:57:11.780324   71424 main.go:141] libmachine: (no-preload-239327) Calling .Start
	I0913 19:57:11.780481   71424 main.go:141] libmachine: (no-preload-239327) Ensuring networks are active...
	I0913 19:57:11.781265   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network default is active
	I0913 19:57:11.781663   71424 main.go:141] libmachine: (no-preload-239327) Ensuring network mk-no-preload-239327 is active
	I0913 19:57:11.782007   71424 main.go:141] libmachine: (no-preload-239327) Getting domain xml...
	I0913 19:57:11.782826   71424 main.go:141] libmachine: (no-preload-239327) Creating domain...
	I0913 19:57:12.992355   71424 main.go:141] libmachine: (no-preload-239327) Waiting to get IP...
	I0913 19:57:12.993373   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:12.993782   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:12.993855   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:12.993770   72661 retry.go:31] will retry after 199.574184ms: waiting for machine to come up
	I0913 19:57:13.195419   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.195877   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.195911   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.195826   72661 retry.go:31] will retry after 380.700462ms: waiting for machine to come up
	I0913 19:57:13.578683   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.579202   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.579222   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.579162   72661 retry.go:31] will retry after 398.874813ms: waiting for machine to come up
	I0913 19:57:13.979670   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:13.979999   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:13.980026   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:13.979969   72661 retry.go:31] will retry after 430.946638ms: waiting for machine to come up
	I0913 19:57:14.412524   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:14.412887   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:14.412919   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:14.412851   72661 retry.go:31] will retry after 619.103851ms: waiting for machine to come up
	I0913 19:57:15.033546   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.034023   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.034049   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.033968   72661 retry.go:31] will retry after 686.825946ms: waiting for machine to come up
	I0913 19:57:15.722892   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:15.723272   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:15.723291   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:15.723232   72661 retry.go:31] will retry after 950.457281ms: waiting for machine to come up
	I0913 19:57:16.760330   71233 start.go:360] acquireMachinesLock for embed-certs-175374: {Name:mk2a4fc9f87a3264088553785e086036ce1d8c5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 19:57:16.675363   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:16.675847   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:16.675877   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:16.675800   72661 retry.go:31] will retry after 1.216886459s: waiting for machine to come up
	I0913 19:57:17.894808   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:17.895217   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:17.895239   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:17.895175   72661 retry.go:31] will retry after 1.427837109s: waiting for machine to come up
	I0913 19:57:19.324743   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:19.325196   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:19.325217   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:19.325162   72661 retry.go:31] will retry after 1.457475552s: waiting for machine to come up
	I0913 19:57:20.783805   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:20.784266   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:20.784330   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:20.784199   72661 retry.go:31] will retry after 1.982491512s: waiting for machine to come up
	I0913 19:57:22.768091   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:22.768617   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:22.768648   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:22.768571   72661 retry.go:31] will retry after 2.984595157s: waiting for machine to come up
	I0913 19:57:25.756723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:25.757201   71424 main.go:141] libmachine: (no-preload-239327) DBG | unable to find current IP address of domain no-preload-239327 in network mk-no-preload-239327
	I0913 19:57:25.757254   71424 main.go:141] libmachine: (no-preload-239327) DBG | I0913 19:57:25.757153   72661 retry.go:31] will retry after 3.54213444s: waiting for machine to come up
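	The retry.go lines above show the kvm2 driver polling libvirt for the VM's DHCP lease with steadily growing, jittered delays until an IP address appears. A minimal, self-contained Go sketch of that wait-with-backoff pattern is below; the lookupIP helper, the delay growth factor, and the timeout are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying libvirt's DHCP leases for the domain's
	// MAC address; it fails until a lease shows up (it always fails in this sketch).
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac + " yet")
	}

	// waitForIP retries with a growing, jittered delay, mirroring the
	// "will retry after ...: waiting for machine to come up" lines above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // roughly the growth seen in the log
		}
		return "", fmt.Errorf("timed out waiting for an IP on %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:14:8c:9d", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}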
	I0913 19:57:30.479236   71702 start.go:364] duration metric: took 4m5.481713344s to acquireMachinesLock for "default-k8s-diff-port-512125"
	I0913 19:57:30.479302   71702 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:30.479311   71702 fix.go:54] fixHost starting: 
	I0913 19:57:30.479747   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:30.479800   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:30.496493   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0913 19:57:30.497088   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:30.497677   71702 main.go:141] libmachine: Using API Version  1
	I0913 19:57:30.497710   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:30.498088   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:30.498293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:30.498469   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 19:57:30.500176   71702 fix.go:112] recreateIfNeeded on default-k8s-diff-port-512125: state=Stopped err=<nil>
	I0913 19:57:30.500202   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	W0913 19:57:30.500367   71702 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:30.503496   71702 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-512125" ...
	I0913 19:57:29.301999   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302506   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has current primary IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.302529   71424 main.go:141] libmachine: (no-preload-239327) Found IP for machine: 192.168.50.13
	I0913 19:57:29.302571   71424 main.go:141] libmachine: (no-preload-239327) Reserving static IP address...
	I0913 19:57:29.302937   71424 main.go:141] libmachine: (no-preload-239327) Reserved static IP address: 192.168.50.13
	I0913 19:57:29.302956   71424 main.go:141] libmachine: (no-preload-239327) Waiting for SSH to be available...
	I0913 19:57:29.302980   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.303002   71424 main.go:141] libmachine: (no-preload-239327) DBG | skip adding static IP to network mk-no-preload-239327 - found existing host DHCP lease matching {name: "no-preload-239327", mac: "52:54:00:14:8c:9d", ip: "192.168.50.13"}
	I0913 19:57:29.303016   71424 main.go:141] libmachine: (no-preload-239327) DBG | Getting to WaitForSSH function...
	I0913 19:57:29.305047   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305362   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.305404   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.305515   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH client type: external
	I0913 19:57:29.305542   71424 main.go:141] libmachine: (no-preload-239327) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa (-rw-------)
	I0913 19:57:29.305564   71424 main.go:141] libmachine: (no-preload-239327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:29.305573   71424 main.go:141] libmachine: (no-preload-239327) DBG | About to run SSH command:
	I0913 19:57:29.305581   71424 main.go:141] libmachine: (no-preload-239327) DBG | exit 0
	I0913 19:57:29.425845   71424 main.go:141] libmachine: (no-preload-239327) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:29.426277   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetConfigRaw
	I0913 19:57:29.426883   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.429328   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429569   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.429604   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.429866   71424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/config.json ...
	I0913 19:57:29.430088   71424 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:29.430124   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:29.430316   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.432371   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432697   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.432723   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.432877   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.433028   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433161   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.433304   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.433452   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.433659   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.433671   71424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:29.530650   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:29.530683   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.530900   71424 buildroot.go:166] provisioning hostname "no-preload-239327"
	I0913 19:57:29.530926   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.531118   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.533702   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534171   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.534199   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.534417   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.534572   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534745   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.534891   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.535019   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.535187   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.535199   71424 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-239327 && echo "no-preload-239327" | sudo tee /etc/hostname
	I0913 19:57:29.648889   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-239327
	
	I0913 19:57:29.648913   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.651418   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651794   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.651818   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.651947   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.652123   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652233   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.652398   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.652574   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:29.652776   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:29.652794   71424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-239327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-239327/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-239327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:29.762739   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:29.762770   71424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:29.762788   71424 buildroot.go:174] setting up certificates
	I0913 19:57:29.762798   71424 provision.go:84] configureAuth start
	I0913 19:57:29.762807   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetMachineName
	I0913 19:57:29.763076   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:29.765579   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.765844   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.765881   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.766037   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.768073   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768363   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.768389   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.768465   71424 provision.go:143] copyHostCerts
	I0913 19:57:29.768517   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:29.768527   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:29.768590   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:29.768687   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:29.768694   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:29.768722   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:29.768788   71424 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:29.768795   71424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:29.768817   71424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:29.768889   71424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.no-preload-239327 san=[127.0.0.1 192.168.50.13 localhost minikube no-preload-239327]
	I0913 19:57:29.880624   71424 provision.go:177] copyRemoteCerts
	I0913 19:57:29.880682   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:29.880717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:29.883382   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883679   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:29.883706   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:29.883861   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:29.884034   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:29.884172   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:29.884299   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:29.964073   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:57:29.988940   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:30.013491   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0913 19:57:30.038401   71424 provision.go:87] duration metric: took 275.590034ms to configureAuth
	I0913 19:57:30.038427   71424 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:30.038638   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:30.038726   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.041435   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041734   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.041758   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.041939   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.042135   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042328   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.042488   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.042633   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.042788   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.042803   71424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:30.253339   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:30.253366   71424 machine.go:96] duration metric: took 823.250507ms to provisionDockerMachine
	I0913 19:57:30.253379   71424 start.go:293] postStartSetup for "no-preload-239327" (driver="kvm2")
	I0913 19:57:30.253391   71424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:30.253413   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.253755   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:30.253789   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.256252   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256514   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.256540   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.256711   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.256876   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.257073   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.257214   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.337478   71424 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:30.342399   71424 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:30.342432   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:30.342520   71424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:30.342602   71424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:30.342687   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:30.352513   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:30.377672   71424 start.go:296] duration metric: took 124.280454ms for postStartSetup
	I0913 19:57:30.377713   71424 fix.go:56] duration metric: took 18.619042375s for fixHost
	I0913 19:57:30.377736   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.380480   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380762   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.380784   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.380956   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.381202   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381348   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.381458   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.381616   71424 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:30.381771   71424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.13 22 <nil> <nil>}
	I0913 19:57:30.381780   71424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:30.479035   71424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257450.452618583
	
	I0913 19:57:30.479060   71424 fix.go:216] guest clock: 1726257450.452618583
	I0913 19:57:30.479069   71424 fix.go:229] Guest: 2024-09-13 19:57:30.452618583 +0000 UTC Remote: 2024-09-13 19:57:30.377717716 +0000 UTC m=+279.312798159 (delta=74.900867ms)
	I0913 19:57:30.479125   71424 fix.go:200] guest clock delta is within tolerance: 74.900867ms
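	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the 74.9ms drift as being within tolerance. A short Go sketch of that comparison follows, using the guest timestamp from the log; the 2-second tolerance here is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (9-digit nanosecond field)
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726257450.452618583") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance for this sketch
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}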
	I0913 19:57:30.479144   71424 start.go:83] releasing machines lock for "no-preload-239327", held for 18.720496354s
	I0913 19:57:30.479184   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.479427   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:30.481882   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482255   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.482282   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.482456   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.482964   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483140   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:30.483216   71424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:30.483243   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.483423   71424 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:30.483453   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:30.485658   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486000   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486026   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486080   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486173   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.486463   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.486536   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:30.486556   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:30.486581   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.486717   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:30.486859   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:30.487019   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:30.487177   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:30.567383   71424 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:30.589782   71424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:30.731014   71424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:30.737329   71424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:30.737400   71424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:30.753326   71424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:30.753355   71424 start.go:495] detecting cgroup driver to use...
	I0913 19:57:30.753427   71424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:30.769188   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:30.783273   71424 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:30.783338   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:30.796488   71424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:30.809856   71424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:30.920704   71424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:31.096766   71424 docker.go:233] disabling docker service ...
	I0913 19:57:31.096843   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:31.111766   71424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:31.127537   71424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:31.243075   71424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:31.367950   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:31.382349   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:31.401339   71424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:31.401408   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.412154   71424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:31.412230   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.423247   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.433976   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.445438   71424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:31.457530   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.468624   71424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.487026   71424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:31.498412   71424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:31.508829   71424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:31.508895   71424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:31.524710   71424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:31.535524   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:31.653359   71424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:31.747320   71424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:31.747407   71424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:31.752629   71424 start.go:563] Will wait 60s for crictl version
	I0913 19:57:31.752688   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:31.756745   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:31.801760   71424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:31.801845   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.831043   71424 ssh_runner.go:195] Run: crio --version
	I0913 19:57:31.864324   71424 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
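	After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the log waits up to 60s for /var/run/crio/crio.sock before probing crictl. A simplified local sketch of that socket wait is below; minikube performs the equivalent check over SSH via ssh_runner, so this is illustrative only.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses, mirroring
	// "Will wait 60s for socket path /var/run/crio/crio.sock" in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}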
	I0913 19:57:30.504936   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Start
	I0913 19:57:30.505113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring networks are active...
	I0913 19:57:30.505954   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network default is active
	I0913 19:57:30.506465   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Ensuring network mk-default-k8s-diff-port-512125 is active
	I0913 19:57:30.506848   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Getting domain xml...
	I0913 19:57:30.507643   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Creating domain...
	I0913 19:57:31.762345   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting to get IP...
	I0913 19:57:31.763307   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.763844   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.763764   72780 retry.go:31] will retry after 200.585233ms: waiting for machine to come up
	I0913 19:57:31.966496   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968386   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:31.968411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:31.968318   72780 retry.go:31] will retry after 263.858664ms: waiting for machine to come up
	I0913 19:57:32.234115   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.234611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.234528   72780 retry.go:31] will retry after 372.592721ms: waiting for machine to come up
	I0913 19:57:32.609295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609822   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:32.609852   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:32.609783   72780 retry.go:31] will retry after 570.937116ms: waiting for machine to come up
	I0913 19:57:33.182680   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183060   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.183090   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.183013   72780 retry.go:31] will retry after 573.320817ms: waiting for machine to come up
	I0913 19:57:33.757741   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758113   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:33.758145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:33.758052   72780 retry.go:31] will retry after 732.322448ms: waiting for machine to come up
	I0913 19:57:34.492123   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492507   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:34.492538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:34.492457   72780 retry.go:31] will retry after 958.042939ms: waiting for machine to come up
	I0913 19:57:31.865671   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetIP
	I0913 19:57:31.868390   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868769   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:31.868809   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:31.868948   71424 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:31.873443   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:31.886704   71424 kubeadm.go:883] updating cluster {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:31.886832   71424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:31.886886   71424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:31.925232   71424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:31.925256   71424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:57:31.925331   71424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.925351   71424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.925350   71424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.925433   71424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.925483   71424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:31.925542   71424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.925553   71424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.925619   71424 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927195   71424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:31.927221   71424 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 19:57:31.927234   71424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:31.927201   71424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:31.927210   71424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:31.927265   71424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:31.927291   71424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.127330   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.132821   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.142922   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.151533   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.187158   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.196395   71424 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0913 19:57:32.196447   71424 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.196495   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.197121   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.223747   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0913 19:57:32.241044   71424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0913 19:57:32.241098   71424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.241146   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.241193   71424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0913 19:57:32.241248   71424 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.241305   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.307038   71424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0913 19:57:32.307081   71424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.307161   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310315   71424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0913 19:57:32.310353   71424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.310403   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.310456   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.310513   71424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0913 19:57:32.310544   71424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.310579   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:32.432848   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.432949   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.432981   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.433034   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.433086   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.433185   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.568999   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.569071   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.569090   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.569137   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.569158   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0913 19:57:32.569239   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.686591   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0913 19:57:32.709864   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0913 19:57:32.709957   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 19:57:32.709984   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.710022   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0913 19:57:32.710074   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0913 19:57:32.714371   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0913 19:57:32.812533   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 19:57:32.812546   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 19:57:32.812646   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:32.812679   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:32.822802   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0913 19:57:32.822821   71424 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822870   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0913 19:57:32.822949   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 19:57:32.823020   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 19:57:32.823036   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 19:57:32.823105   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:32.823127   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:32.823108   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:32.827694   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0913 19:57:32.827935   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0913 19:57:33.133519   71424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:35.452314   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452807   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:35.452832   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:35.452764   72780 retry.go:31] will retry after 1.050724369s: waiting for machine to come up
	I0913 19:57:36.504580   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505059   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:36.505083   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:36.505005   72780 retry.go:31] will retry after 1.828970571s: waiting for machine to come up
	I0913 19:57:38.336079   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336524   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:38.336551   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:38.336484   72780 retry.go:31] will retry after 1.745975748s: waiting for machine to come up
	I0913 19:57:36.540092   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.717200665s)
	I0913 19:57:36.540120   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0913 19:57:36.540143   71424 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540185   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (3.717045749s)
	I0913 19:57:36.540088   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (3.716939076s)
	I0913 19:57:36.540246   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (3.717074576s)
	I0913 19:57:36.540263   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0913 19:57:36.540196   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0913 19:57:36.540247   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0913 19:57:36.540220   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0913 19:57:36.540318   71424 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.406769496s)
	I0913 19:57:36.540350   71424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0913 19:57:36.540383   71424 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:36.540425   71424 ssh_runner.go:195] Run: which crictl
	I0913 19:57:38.607617   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06732841s)
	I0913 19:57:38.607656   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0913 19:57:38.607657   71424 ssh_runner.go:235] Completed: which crictl: (2.067217735s)
	I0913 19:57:38.607681   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0913 19:57:38.607717   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:38.655710   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096743   71424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.440995963s)
	I0913 19:57:40.096836   71424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:40.096885   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.489140573s)
	I0913 19:57:40.096912   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0913 19:57:40.096946   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.097003   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0913 19:57:40.142959   71424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 19:57:40.143072   71424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:40.083781   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084316   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:40.084339   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:40.084202   72780 retry.go:31] will retry after 2.736824298s: waiting for machine to come up
	I0913 19:57:42.823269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823689   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:42.823723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:42.823648   72780 retry.go:31] will retry after 3.517461718s: waiting for machine to come up
	I0913 19:57:42.266895   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.169865218s)
	I0913 19:57:42.266929   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0913 19:57:42.266971   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.267074   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0913 19:57:42.266978   71424 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.123869445s)
	I0913 19:57:42.267185   71424 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0913 19:57:44.129215   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.86211411s)
	I0913 19:57:44.129248   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0913 19:57:44.129280   71424 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:44.129356   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0913 19:57:46.077759   71424 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.948382667s)
	I0913 19:57:46.077791   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0913 19:57:46.077818   71424 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.077859   71424 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0913 19:57:46.342187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342624   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | unable to find current IP address of domain default-k8s-diff-port-512125 in network mk-default-k8s-diff-port-512125
	I0913 19:57:46.342661   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | I0913 19:57:46.342555   72780 retry.go:31] will retry after 3.728072283s: waiting for machine to come up
	I0913 19:57:46.728210   71424 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 19:57:46.728256   71424 cache_images.go:123] Successfully loaded all cached images
	I0913 19:57:46.728261   71424 cache_images.go:92] duration metric: took 14.802990931s to LoadCachedImages
	I0913 19:57:46.728274   71424 kubeadm.go:934] updating node { 192.168.50.13 8443 v1.31.1 crio true true} ...
	I0913 19:57:46.728393   71424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-239327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:46.728503   71424 ssh_runner.go:195] Run: crio config
	I0913 19:57:46.777890   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:46.777916   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:46.777928   71424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:46.777948   71424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-239327 NodeName:no-preload-239327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:46.778129   71424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-239327"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:46.778201   71424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:46.788550   71424 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:46.788612   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:46.797610   71424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0913 19:57:46.813683   71424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:46.829359   71424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0913 19:57:46.846055   71424 ssh_runner.go:195] Run: grep 192.168.50.13	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:46.849820   71424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:46.861351   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:46.976645   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:46.993359   71424 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327 for IP: 192.168.50.13
	I0913 19:57:46.993390   71424 certs.go:194] generating shared ca certs ...
	I0913 19:57:46.993410   71424 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:46.993586   71424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:46.993648   71424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:46.993661   71424 certs.go:256] generating profile certs ...
	I0913 19:57:46.993761   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/client.key
	I0913 19:57:46.993845   71424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key.1d2f30c2
	I0913 19:57:46.993896   71424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key
	I0913 19:57:46.994053   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:46.994120   71424 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:46.994134   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:46.994178   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:46.994218   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:46.994250   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:46.994307   71424 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:46.995114   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:47.025538   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:47.078641   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:47.107063   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:47.147536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 19:57:47.179796   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:47.202593   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:47.227536   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/no-preload-239327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:47.251324   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:47.274447   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:47.297216   71424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:47.320138   71424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:47.336696   71424 ssh_runner.go:195] Run: openssl version
	I0913 19:57:47.342403   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:47.352378   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356749   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.356793   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:47.362541   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:57:47.372621   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:47.382729   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387369   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.387431   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:47.393218   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:47.403529   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:47.414210   71424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418917   71424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.418965   71424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:47.424414   71424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:47.434850   71424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:47.439245   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:47.445052   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:47.450680   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:47.456489   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:47.462051   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:47.467582   71424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:57:47.473181   71424 kubeadm.go:392] StartCluster: {Name:no-preload-239327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-239327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:47.473256   71424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:47.473295   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.510432   71424 cri.go:89] found id: ""
	I0913 19:57:47.510508   71424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:47.520272   71424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:47.520293   71424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:47.520338   71424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:47.529391   71424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:47.530298   71424 kubeconfig.go:125] found "no-preload-239327" server: "https://192.168.50.13:8443"
	I0913 19:57:47.532275   71424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:47.541080   71424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.13
	I0913 19:57:47.541115   71424 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:47.541130   71424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:47.541167   71424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:47.575726   71424 cri.go:89] found id: ""
	I0913 19:57:47.575797   71424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:47.591640   71424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:47.600616   71424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:47.600634   71424 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:47.600680   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:57:47.609317   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:47.609360   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:47.618729   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:57:47.627198   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:47.627241   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:47.636259   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.645245   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:47.645303   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:47.654245   71424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:57:47.662970   71424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:47.663045   71424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:47.672250   71424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:47.681504   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:47.783618   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.614939   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.812739   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.888885   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:48.999877   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:48.999966   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:49.500587   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.001072   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:50.026939   71424 api_server.go:72] duration metric: took 1.027062019s to wait for apiserver process to appear ...
	I0913 19:57:50.026965   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:57:50.026983   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:51.327259   71926 start.go:364] duration metric: took 4m9.277620447s to acquireMachinesLock for "old-k8s-version-234290"
	I0913 19:57:51.327324   71926 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:57:51.327338   71926 fix.go:54] fixHost starting: 
	I0913 19:57:51.327769   71926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:51.327815   71926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:51.344030   71926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0913 19:57:51.344527   71926 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:51.344994   71926 main.go:141] libmachine: Using API Version  1
	I0913 19:57:51.345018   71926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:51.345360   71926 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:51.345563   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:57:51.345700   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetState
	I0913 19:57:51.347144   71926 fix.go:112] recreateIfNeeded on old-k8s-version-234290: state=Stopped err=<nil>
	I0913 19:57:51.347169   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	W0913 19:57:51.347304   71926 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:57:51.349231   71926 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-234290" ...
	I0913 19:57:51.350756   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .Start
	I0913 19:57:51.350906   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring networks are active...
	I0913 19:57:51.351585   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network default is active
	I0913 19:57:51.351974   71926 main.go:141] libmachine: (old-k8s-version-234290) Ensuring network mk-old-k8s-version-234290 is active
	I0913 19:57:51.352333   71926 main.go:141] libmachine: (old-k8s-version-234290) Getting domain xml...
	I0913 19:57:51.352947   71926 main.go:141] libmachine: (old-k8s-version-234290) Creating domain...
	I0913 19:57:50.075284   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075782   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has current primary IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.075801   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Found IP for machine: 192.168.61.3
	I0913 19:57:50.075813   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserving static IP address...
	I0913 19:57:50.076344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Reserved static IP address: 192.168.61.3
	I0913 19:57:50.076383   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Waiting for SSH to be available...
	I0913 19:57:50.076420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.076452   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | skip adding static IP to network mk-default-k8s-diff-port-512125 - found existing host DHCP lease matching {name: "default-k8s-diff-port-512125", mac: "52:54:00:5b:54:e0", ip: "192.168.61.3"}
	I0913 19:57:50.076468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Getting to WaitForSSH function...
	I0913 19:57:50.078783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079184   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.079251   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.079322   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH client type: external
	I0913 19:57:50.079363   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa (-rw-------)
	I0913 19:57:50.079395   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:57:50.079422   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | About to run SSH command:
	I0913 19:57:50.079444   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | exit 0
	I0913 19:57:50.206454   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | SSH cmd err, output: <nil>: 
	I0913 19:57:50.206818   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetConfigRaw
	I0913 19:57:50.207468   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.210231   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210663   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.210690   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.210983   71702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/config.json ...
	I0913 19:57:50.211209   71702 machine.go:93] provisionDockerMachine start ...
	I0913 19:57:50.211228   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:50.211520   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.214581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.214920   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.214943   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.215121   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.215303   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215451   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.215645   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.215804   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.216045   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.216060   71702 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:57:50.331657   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:57:50.331684   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.331934   71702 buildroot.go:166] provisioning hostname "default-k8s-diff-port-512125"
	I0913 19:57:50.331950   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.332149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.335159   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335537   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.335567   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.335723   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.335908   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.336226   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.336384   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.336597   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.336616   71702 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512125 && echo "default-k8s-diff-port-512125" | sudo tee /etc/hostname
	I0913 19:57:50.467731   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512125
	
	I0913 19:57:50.467765   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.470668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471106   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.471135   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.471401   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.471588   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471784   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.471944   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.472126   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.472334   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.472352   71702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512125/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:57:50.587535   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:57:50.587565   71702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:57:50.587599   71702 buildroot.go:174] setting up certificates
	I0913 19:57:50.587608   71702 provision.go:84] configureAuth start
	I0913 19:57:50.587617   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetMachineName
	I0913 19:57:50.587881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:50.590622   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591016   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.591046   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.591235   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.593758   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.594188   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.594290   71702 provision.go:143] copyHostCerts
	I0913 19:57:50.594351   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:57:50.594364   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:57:50.594423   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:57:50.594504   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:57:50.594511   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:57:50.594529   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:57:50.594580   71702 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:57:50.594586   71702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:57:50.594603   71702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:57:50.594654   71702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512125 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-512125 localhost minikube]
	I0913 19:57:50.688827   71702 provision.go:177] copyRemoteCerts
	I0913 19:57:50.688879   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:57:50.688903   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.691724   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.692142   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.692387   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.692579   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.692754   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.692876   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:50.776582   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:57:50.802453   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0913 19:57:50.827446   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:57:50.855966   71702 provision.go:87] duration metric: took 268.344608ms to configureAuth
	I0913 19:57:50.855995   71702 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:57:50.856210   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:50.856298   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:50.859097   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859426   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:50.859464   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:50.859667   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:50.859851   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860001   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:50.860103   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:50.860270   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:50.860450   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:50.860472   71702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:57:51.091137   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:57:51.091162   71702 machine.go:96] duration metric: took 879.939352ms to provisionDockerMachine
	I0913 19:57:51.091174   71702 start.go:293] postStartSetup for "default-k8s-diff-port-512125" (driver="kvm2")
	I0913 19:57:51.091187   71702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:57:51.091208   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.091525   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:57:51.091558   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.094398   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094755   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.094783   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.094945   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.095112   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.095269   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.095391   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.176959   71702 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:57:51.181585   71702 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:57:51.181614   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:57:51.181687   71702 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:57:51.181768   71702 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:57:51.181857   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:57:51.191417   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:51.218033   71702 start.go:296] duration metric: took 126.844149ms for postStartSetup
	I0913 19:57:51.218076   71702 fix.go:56] duration metric: took 20.738765131s for fixHost
	I0913 19:57:51.218119   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.221206   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221713   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.221748   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.221946   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.222151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222344   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.222538   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.222673   71702 main.go:141] libmachine: Using SSH client type: native
	I0913 19:57:51.222834   71702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0913 19:57:51.222844   71702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:57:51.327091   71702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257471.303496315
	
	I0913 19:57:51.327121   71702 fix.go:216] guest clock: 1726257471.303496315
	I0913 19:57:51.327132   71702 fix.go:229] Guest: 2024-09-13 19:57:51.303496315 +0000 UTC Remote: 2024-09-13 19:57:51.218080493 +0000 UTC m=+266.360246627 (delta=85.415822ms)
	I0913 19:57:51.327179   71702 fix.go:200] guest clock delta is within tolerance: 85.415822ms
	I0913 19:57:51.327187   71702 start.go:83] releasing machines lock for "default-k8s-diff-port-512125", held for 20.847905198s
	I0913 19:57:51.327218   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.327478   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:51.330295   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330668   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.330701   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.330809   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331309   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331492   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 19:57:51.331611   71702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:57:51.331653   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.331703   71702 ssh_runner.go:195] Run: cat /version.json
	I0913 19:57:51.331728   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 19:57:51.334221   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334411   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334581   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334609   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334779   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.334879   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:51.334919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:51.334966   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335052   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 19:57:51.335126   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335198   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 19:57:51.335270   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.335331   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 19:57:51.335546   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 19:57:51.415552   71702 ssh_runner.go:195] Run: systemctl --version
	I0913 19:57:51.440411   71702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:57:51.584757   71702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:57:51.590531   71702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:57:51.590604   71702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:57:51.606595   71702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:57:51.606619   71702 start.go:495] detecting cgroup driver to use...
	I0913 19:57:51.606678   71702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:57:51.622887   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:57:51.642168   71702 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:57:51.642235   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:57:51.657201   71702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:57:51.672504   71702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:57:51.797046   71702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:57:51.944856   71702 docker.go:233] disabling docker service ...
	I0913 19:57:51.944930   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:57:51.962885   71702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:57:51.979765   71702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:57:52.144865   71702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:57:52.305549   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:57:52.319742   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:57:52.341814   71702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:57:52.341877   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.356233   71702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:57:52.356304   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.367867   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.380357   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.396158   71702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:57:52.409682   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.425012   71702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.443770   71702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:57:52.455296   71702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:57:52.471321   71702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:57:52.471384   71702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:57:52.486626   71702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:57:52.503172   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:52.637550   71702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:57:52.749215   71702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:57:52.749314   71702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:57:52.755695   71702 start.go:563] Will wait 60s for crictl version
	I0913 19:57:52.755764   71702 ssh_runner.go:195] Run: which crictl
	I0913 19:57:52.760759   71702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:57:52.810845   71702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:57:52.810938   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.843238   71702 ssh_runner.go:195] Run: crio --version
	I0913 19:57:52.881367   71702 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:57:52.882926   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetIP
	I0913 19:57:52.886161   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886611   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 19:57:52.886640   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 19:57:52.886873   71702 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0913 19:57:52.891585   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:52.909764   71702 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:57:52.909895   71702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:57:52.909946   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:52.951579   71702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:57:52.951663   71702 ssh_runner.go:195] Run: which lz4
	I0913 19:57:52.956284   71702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:57:52.961057   71702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:57:52.961107   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:57:54.413207   71702 crio.go:462] duration metric: took 1.457013899s to copy over tarball
	I0913 19:57:54.413281   71702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:57:53.355482   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.355515   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.355532   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.403530   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:57:53.403563   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:57:53.527891   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:53.540614   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:53.540645   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.027103   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.033969   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.034007   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:54.527232   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:54.533061   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:57:54.533101   71424 api_server.go:103] status: https://192.168.50.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:57:55.027284   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 19:57:55.033940   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 19:57:55.041955   71424 api_server.go:141] control plane version: v1.31.1
	I0913 19:57:55.041994   71424 api_server.go:131] duration metric: took 5.01501979s to wait for apiserver health ...
	I0913 19:57:55.042004   71424 cni.go:84] Creating CNI manager for ""
	I0913 19:57:55.042012   71424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:55.043980   71424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:57:55.045528   71424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:57:55.095694   71424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:57:55.130974   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:57:55.144810   71424 system_pods.go:59] 8 kube-system pods found
	I0913 19:57:55.144850   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:57:55.144861   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:57:55.144871   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:57:55.144879   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:57:55.144885   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 19:57:55.144892   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:57:55.144899   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:57:55.144904   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 19:57:55.144912   71424 system_pods.go:74] duration metric: took 13.911878ms to wait for pod list to return data ...
	I0913 19:57:55.144925   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:57:55.150452   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:57:55.150485   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 19:57:55.150498   71424 node_conditions.go:105] duration metric: took 5.568616ms to run NodePressure ...
	I0913 19:57:55.150517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:55.469599   71424 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475337   71424 kubeadm.go:739] kubelet initialised
	I0913 19:57:55.475361   71424 kubeadm.go:740] duration metric: took 5.681154ms waiting for restarted kubelet to initialise ...
	I0913 19:57:55.475372   71424 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:55.485218   71424 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.495426   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495451   71424 pod_ready.go:82] duration metric: took 10.207619ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.495464   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.495474   71424 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.501722   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501746   71424 pod_ready.go:82] duration metric: took 6.262633ms for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.501758   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "etcd-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.501766   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.508771   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508797   71424 pod_ready.go:82] duration metric: took 7.022139ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.508808   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-apiserver-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.508816   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.533464   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533494   71424 pod_ready.go:82] duration metric: took 24.667318ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.533505   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.533515   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:55.935346   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935376   71424 pod_ready.go:82] duration metric: took 401.852235ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:55.935388   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-proxy-b24zg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:55.935399   71424 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.335156   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335194   71424 pod_ready.go:82] duration metric: took 399.782959ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.335207   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "kube-scheduler-no-preload-239327" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.335216   71424 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:57:56.734606   71424 pod_ready.go:98] node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734633   71424 pod_ready.go:82] duration metric: took 399.405497ms for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 19:57:56.734644   71424 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-239327" hosting pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:56.734654   71424 pod_ready.go:39] duration metric: took 1.259272309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:57:56.734673   71424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:57:56.748215   71424 ops.go:34] apiserver oom_adj: -16
	I0913 19:57:56.748236   71424 kubeadm.go:597] duration metric: took 9.227936606s to restartPrimaryControlPlane
	I0913 19:57:56.748247   71424 kubeadm.go:394] duration metric: took 9.275070425s to StartCluster
	I0913 19:57:56.748267   71424 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.748361   71424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:57:56.750523   71424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:56.750818   71424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:57:56.750914   71424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:57:56.751016   71424 addons.go:69] Setting storage-provisioner=true in profile "no-preload-239327"
	I0913 19:57:56.751037   71424 addons.go:234] Setting addon storage-provisioner=true in "no-preload-239327"
	W0913 19:57:56.751046   71424 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:57:56.751034   71424 addons.go:69] Setting default-storageclass=true in profile "no-preload-239327"
	I0913 19:57:56.751066   71424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-239327"
	I0913 19:57:56.751076   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751108   71424 config.go:182] Loaded profile config "no-preload-239327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:57:56.751172   71424 addons.go:69] Setting metrics-server=true in profile "no-preload-239327"
	I0913 19:57:56.751186   71424 addons.go:234] Setting addon metrics-server=true in "no-preload-239327"
	W0913 19:57:56.751208   71424 addons.go:243] addon metrics-server should already be in state true
	I0913 19:57:56.751231   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.751527   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751550   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751568   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751581   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.751735   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.751799   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.753086   71424 out.go:177] * Verifying Kubernetes components...
	I0913 19:57:56.755069   71424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:56.769111   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0913 19:57:56.769722   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770138   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0913 19:57:56.770380   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.770397   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.770472   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0913 19:57:56.770616   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.770858   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.771033   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771054   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771358   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.771375   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.771393   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771418   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.771553   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.772058   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772097   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.772313   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.772870   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.772911   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.791429   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0913 19:57:56.791741   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.791800   71424 addons.go:234] Setting addon default-storageclass=true in "no-preload-239327"
	W0913 19:57:56.791813   71424 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:57:56.791841   71424 host.go:66] Checking if "no-preload-239327" exists ...
	I0913 19:57:56.792127   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.792142   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.792204   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.792234   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.792419   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.792545   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.794360   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.796432   71424 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:57:56.797889   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:57:56.797906   71424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:57:56.797936   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.801559   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.801916   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.801937   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.803787   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.803937   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.806185   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.806357   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.809000   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0913 19:57:56.809444   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.809928   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.809943   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.809962   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0913 19:57:56.810309   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.810511   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.810829   71424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:57:56.810862   71424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:57:56.810872   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.810886   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.811194   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.811321   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.812760   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.814270   71424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:57:52.718732   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting to get IP...
	I0913 19:57:52.719575   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:52.720004   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:52.720083   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:52.720009   72953 retry.go:31] will retry after 304.912151ms: waiting for machine to come up
	I0913 19:57:53.026797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.027578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.027703   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.027641   72953 retry.go:31] will retry after 242.676909ms: waiting for machine to come up
	I0913 19:57:53.272108   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.272588   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.272612   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.272518   72953 retry.go:31] will retry after 405.559393ms: waiting for machine to come up
	I0913 19:57:53.679940   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:53.680380   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:53.680414   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:53.680348   72953 retry.go:31] will retry after 378.743628ms: waiting for machine to come up
	I0913 19:57:54.061169   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.061778   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.061805   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.061698   72953 retry.go:31] will retry after 481.46563ms: waiting for machine to come up
	I0913 19:57:54.545134   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:54.545582   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:54.545604   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:54.545548   72953 retry.go:31] will retry after 836.433898ms: waiting for machine to come up
	I0913 19:57:55.383396   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:55.384063   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:55.384094   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:55.384023   72953 retry.go:31] will retry after 848.706378ms: waiting for machine to come up
	I0913 19:57:56.233996   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:56.234429   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:56.234456   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:56.234381   72953 retry.go:31] will retry after 969.158848ms: waiting for machine to come up
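	The 72953 goroutine above polls libvirt for the VM's DHCP lease, sleeping a randomized interval between attempts before giving up on a longer deadline. The Go sketch below illustrates that wait-with-randomized-backoff pattern only; the function names, intervals, and placeholder predicate are assumptions for illustration, not minikube's retry.go API.

// Illustrative sketch: wait for a condition with randomized backoff, similar
// in spirit to the "will retry after ..." messages in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes, sleeping a
// randomized interval between attempts.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Random delay between 200ms and 1s, loosely matching the log above.
		delay := 200*time.Millisecond + time.Duration(rand.Int63n(int64(800*time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Placeholder predicate: pretend the machine gets an IP after ~3s.
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("waiting for machine to come up")
	}, 30*time.Second)
	fmt.Println("done:", err)
}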
	I0913 19:57:56.815854   71424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:56.815866   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:57:56.815878   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.822710   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823097   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.823115   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.823379   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.823519   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.823634   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.823721   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:56.830245   71424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0913 19:57:56.830634   71424 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:57:56.831243   71424 main.go:141] libmachine: Using API Version  1
	I0913 19:57:56.831258   71424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:57:56.831746   71424 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:57:56.831977   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetState
	I0913 19:57:56.833771   71424 main.go:141] libmachine: (no-preload-239327) Calling .DriverName
	I0913 19:57:56.833953   71424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:56.833966   71424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:57:56.833981   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHHostname
	I0913 19:57:56.837171   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837611   71424 main.go:141] libmachine: (no-preload-239327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:8c:9d", ip: ""} in network mk-no-preload-239327: {Iface:virbr1 ExpiryTime:2024-09-13 20:57:22 +0000 UTC Type:0 Mac:52:54:00:14:8c:9d Iaid: IPaddr:192.168.50.13 Prefix:24 Hostname:no-preload-239327 Clientid:01:52:54:00:14:8c:9d}
	I0913 19:57:56.837630   71424 main.go:141] libmachine: (no-preload-239327) DBG | domain no-preload-239327 has defined IP address 192.168.50.13 and MAC address 52:54:00:14:8c:9d in network mk-no-preload-239327
	I0913 19:57:56.837793   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHPort
	I0913 19:57:56.837962   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHKeyPath
	I0913 19:57:56.838198   71424 main.go:141] libmachine: (no-preload-239327) Calling .GetSSHUsername
	I0913 19:57:56.838323   71424 sshutil.go:53] new ssh client: &{IP:192.168.50.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/no-preload-239327/id_rsa Username:docker}
	I0913 19:57:57.030836   71424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.056630   71424 node_ready.go:35] waiting up to 6m0s for node "no-preload-239327" to be "Ready" ...
	I0913 19:57:57.157478   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:57:57.169686   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:57:57.302368   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:57:57.302395   71424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:57:57.355982   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:57:57.356013   71424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:57:57.378079   71424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:57.378128   71424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:57:57.437879   71424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:57:59.395739   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:57:59.399929   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.230206257s)
	I0913 19:57:59.399976   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.399988   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400026   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.242509219s)
	I0913 19:57:59.400067   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400083   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400273   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400287   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400297   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400305   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400481   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.400514   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400529   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.400548   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.400556   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.400706   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.400716   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402063   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.402078   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.402110   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.729071   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.729097   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.729396   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.729416   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.862773   71424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.424844753s)
	I0913 19:57:59.862831   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.862847   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863167   71424 main.go:141] libmachine: (no-preload-239327) DBG | Closing plugin on server side
	I0913 19:57:59.863223   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863241   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863253   71424 main.go:141] libmachine: Making call to close driver server
	I0913 19:57:59.863261   71424 main.go:141] libmachine: (no-preload-239327) Calling .Close
	I0913 19:57:59.863505   71424 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:57:59.863521   71424 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:57:59.863536   71424 addons.go:475] Verifying addon metrics-server=true in "no-preload-239327"
	I0913 19:57:59.865569   71424 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0913 19:57:56.673474   71702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260118506s)
	I0913 19:57:56.673521   71702 crio.go:469] duration metric: took 2.260277637s to extract the tarball
	I0913 19:57:56.673535   71702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:57:56.710512   71702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:57:56.757884   71702 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:57:56.757904   71702 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:57:56.757913   71702 kubeadm.go:934] updating node { 192.168.61.3 8444 v1.31.1 crio true true} ...
	I0913 19:57:56.758026   71702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-512125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:57:56.758115   71702 ssh_runner.go:195] Run: crio config
	I0913 19:57:56.832109   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:57:56.832131   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:57:56.832143   71702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:57:56.832170   71702 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512125 NodeName:default-k8s-diff-port-512125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:57:56.832376   71702 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:57:56.832442   71702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:57:56.845057   71702 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:57:56.845112   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:57:56.855452   71702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0913 19:57:56.874607   71702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:57:56.891656   71702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0913 19:57:56.910268   71702 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0913 19:57:56.915416   71702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:57:56.929858   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:57:57.051400   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:57:57.073706   71702 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125 for IP: 192.168.61.3
	I0913 19:57:57.073736   71702 certs.go:194] generating shared ca certs ...
	I0913 19:57:57.073756   71702 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:57:57.073920   71702 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:57:57.073981   71702 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:57:57.073997   71702 certs.go:256] generating profile certs ...
	I0913 19:57:57.074130   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/client.key
	I0913 19:57:57.074222   71702 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key.c56bc154
	I0913 19:57:57.074281   71702 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key
	I0913 19:57:57.074428   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:57:57.074478   71702 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:57:57.074492   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:57:57.074524   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:57:57.074552   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:57:57.074588   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:57:57.074648   71702 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:57:57.075352   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:57:57.116487   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:57:57.149579   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:57:57.181669   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:57:57.222493   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0913 19:57:57.265591   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:57:57.309431   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:57:57.337978   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/default-k8s-diff-port-512125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:57:57.368737   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:57:57.395163   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:57:57.422620   71702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:57:57.452103   71702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:57:57.473413   71702 ssh_runner.go:195] Run: openssl version
	I0913 19:57:57.481312   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:57:57.492674   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497758   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.497839   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:57:57.504428   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:57:57.516174   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:57:57.531615   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.536963   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.537044   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:57:57.543533   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:57:57.555225   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:57:57.567042   71702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571812   71702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.571880   71702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:57:57.578078   71702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
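	The openssl/ln commands above create OpenSSL subject-hash links (for example /etc/ssl/certs/b5213941.0) so that the CA certificates copied into /usr/share/ca-certificates are discoverable by TLS clients that look certificates up by hash. The Go sketch below shows the same idea; it assumes an openssl binary on PATH and write access to the certs directory, and is illustrative only, not minikube's certs.go code.

// Illustrative sketch: compute the OpenSSL subject hash for a CA certificate
// and create the /etc/ssl/certs/<hash>.0 symlink the commands above test for.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the 8-hex-digit subject
	// hash that OpenSSL uses to look up trusted CAs (e.g. "b5213941").
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to `ln -fs <cert> <link>`: replace any existing link.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}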
	I0913 19:57:57.589068   71702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:57:57.593977   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:57:57.600118   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:57:57.608059   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:57:57.616018   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:57:57.623731   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:57:57.631334   71702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:57:57.639262   71702 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-512125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-512125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:57:57.639371   71702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:57:57.639428   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.690322   71702 cri.go:89] found id: ""
	I0913 19:57:57.690474   71702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:57:57.701319   71702 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:57:57.701343   71702 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:57:57.701398   71702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:57:57.714480   71702 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:57:57.715899   71702 kubeconfig.go:125] found "default-k8s-diff-port-512125" server: "https://192.168.61.3:8444"
	I0913 19:57:57.719013   71702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:57:57.732186   71702 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.3
	I0913 19:57:57.732229   71702 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:57:57.732243   71702 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:57:57.732295   71702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:57:57.777389   71702 cri.go:89] found id: ""
	I0913 19:57:57.777469   71702 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:57:57.800158   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:57:57.813502   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:57:57.813524   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 19:57:57.813587   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 19:57:57.824010   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:57:57.824089   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:57:57.837916   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 19:57:57.848018   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:57:57.848100   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:57:57.858224   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.867720   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:57:57.867791   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:57:57.877546   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 19:57:57.886880   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:57:57.886946   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:57:57.897287   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:57:57.907278   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:58.066862   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.038179   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.245671   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.306302   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:57:59.366665   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:57:59.366755   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867295   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:57:59.867010   71424 addons.go:510] duration metric: took 3.116105462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 19:57:57.205383   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:57.205897   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:57.205926   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:57.205815   72953 retry.go:31] will retry after 1.270443953s: waiting for machine to come up
	I0913 19:57:58.477621   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:57:58.478121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:57:58.478142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:57:58.478071   72953 retry.go:31] will retry after 1.698380616s: waiting for machine to come up
	I0913 19:58:00.179093   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:00.179578   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:00.179602   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:00.179528   72953 retry.go:31] will retry after 2.83575453s: waiting for machine to come up
	I0913 19:58:00.367089   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:00.386556   71702 api_server.go:72] duration metric: took 1.019888667s to wait for apiserver process to appear ...
	I0913 19:58:00.386585   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:00.386612   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:00.387195   71702 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0913 19:58:00.887556   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.321626   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.321655   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.321671   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.348469   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.348523   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.386697   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.431803   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:03.431840   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:03.887458   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:03.892461   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:03.892542   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.387025   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.392727   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:04.392754   71702 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:04.887683   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 19:58:04.892753   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 19:58:04.904148   71702 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:04.904182   71702 api_server.go:131] duration metric: took 4.517588824s to wait for apiserver health ...
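	The sequence above polls https://192.168.61.3:8444/healthz until the control plane answers 200, tolerating the interim 403 (the anonymous healthz request is rejected until the RBAC bootstrap roles exist) and 500 (poststarthooks such as rbac/bootstrap-roles still running) responses. The Go sketch below shows such a poller; the URL, timeout, and the InsecureSkipVerify TLS setting (the apiserver serves a self-signed certificate during bring-up) are assumptions for illustration, not minikube's api_server.go code.

// Illustrative sketch: poll an apiserver /healthz endpoint until it returns
// HTTP 200, logging the intermediate 403/500 responses seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification for this illustrative check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.3:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}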
	I0913 19:58:04.904194   71702 cni.go:84] Creating CNI manager for ""
	I0913 19:58:04.904202   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:04.905663   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:01.560970   71424 node_ready.go:53] node "no-preload-239327" has status "Ready":"False"
	I0913 19:58:04.064801   71424 node_ready.go:49] node "no-preload-239327" has status "Ready":"True"
	I0913 19:58:04.064833   71424 node_ready.go:38] duration metric: took 7.008173513s for node "no-preload-239327" to be "Ready" ...
	I0913 19:58:04.064847   71424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:04.071226   71424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075856   71424 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:04.075876   71424 pod_ready.go:82] duration metric: took 4.620688ms for pod "coredns-7c65d6cfc9-fjzxv" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:04.075886   71424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:06.082608   71424 pod_ready.go:103] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:03.017261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:03.017797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:03.017824   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:03.017750   72953 retry.go:31] will retry after 2.837073214s: waiting for machine to come up
	I0913 19:58:05.856138   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:05.856521   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | unable to find current IP address of domain old-k8s-version-234290 in network mk-old-k8s-version-234290
	I0913 19:58:05.856541   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | I0913 19:58:05.856478   72953 retry.go:31] will retry after 3.468611434s: waiting for machine to come up
	I0913 19:58:04.907086   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:04.935755   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:04.972552   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:04.987070   71702 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:04.987104   71702 system_pods.go:61] "coredns-7c65d6cfc9-zvnss" [b6584e3d-4140-4666-8303-94c0900eaf8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:04.987118   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [5eb1e9b1-b89a-427d-83f5-96d9109b10c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:04.987128   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [5118097e-a1ed-403e-8acb-22c7619a6db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:04.987148   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [37f11854-a2b8-45d5-8491-e2f92b860220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:04.987160   71702 system_pods.go:61] "kube-proxy-xqv9m" [92c9dda2-fabe-4b3b-9bae-892e6daf0889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:04.987172   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [a9f4fa75-b73d-477a-83e9-e855ec50f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:04.987180   71702 system_pods.go:61] "metrics-server-6867b74b74-7ltrm" [8560dbda-82b3-49a1-8ed8-f149e5e99168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:04.987188   71702 system_pods.go:61] "storage-provisioner" [d8f393fe-0f71-4f3c-b17e-6132503c2b9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:04.987198   71702 system_pods.go:74] duration metric: took 14.623093ms to wait for pod list to return data ...
	I0913 19:58:04.987207   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:04.991659   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:04.991686   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:04.991701   71702 node_conditions.go:105] duration metric: took 4.488975ms to run NodePressure ...
	I0913 19:58:04.991720   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:05.329547   71702 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342174   71702 kubeadm.go:739] kubelet initialised
	I0913 19:58:05.342208   71702 kubeadm.go:740] duration metric: took 12.632654ms waiting for restarted kubelet to initialise ...
	I0913 19:58:05.342218   71702 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:05.351246   71702 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.371790   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.857936   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:09.857956   71702 pod_ready.go:82] duration metric: took 4.506679998s for pod "coredns-7c65d6cfc9-zvnss" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.857966   71702 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:10.763154   71233 start.go:364] duration metric: took 54.002772677s to acquireMachinesLock for "embed-certs-175374"
	I0913 19:58:10.763209   71233 start.go:96] Skipping create...Using existing machine configuration
	I0913 19:58:10.763220   71233 fix.go:54] fixHost starting: 
	I0913 19:58:10.763652   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:10.763701   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:10.780781   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0913 19:58:10.781257   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:10.781767   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:10.781792   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:10.782108   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:10.782297   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:10.782435   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:10.783818   71233 fix.go:112] recreateIfNeeded on embed-certs-175374: state=Stopped err=<nil>
	I0913 19:58:10.783838   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	W0913 19:58:10.783968   71233 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 19:58:10.786142   71233 out.go:177] * Restarting existing kvm2 VM for "embed-certs-175374" ...
	I0913 19:58:07.082571   71424 pod_ready.go:93] pod "etcd-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.082601   71424 pod_ready.go:82] duration metric: took 3.006705611s for pod "etcd-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.082614   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087377   71424 pod_ready.go:93] pod "kube-apiserver-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.087394   71424 pod_ready.go:82] duration metric: took 4.772922ms for pod "kube-apiserver-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.087403   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091167   71424 pod_ready.go:93] pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.091181   71424 pod_ready.go:82] duration metric: took 3.772461ms for pod "kube-controller-manager-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.091188   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095143   71424 pod_ready.go:93] pod "kube-proxy-b24zg" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.095158   71424 pod_ready.go:82] duration metric: took 3.964773ms for pod "kube-proxy-b24zg" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.095164   71424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259916   71424 pod_ready.go:93] pod "kube-scheduler-no-preload-239327" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:07.259939   71424 pod_ready.go:82] duration metric: took 164.768229ms for pod "kube-scheduler-no-preload-239327" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:07.259948   71424 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:09.267203   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:09.327843   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328294   71926 main.go:141] libmachine: (old-k8s-version-234290) Found IP for machine: 192.168.72.137
	I0913 19:58:09.328318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has current primary IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.328326   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserving static IP address...
	I0913 19:58:09.328829   71926 main.go:141] libmachine: (old-k8s-version-234290) Reserved static IP address: 192.168.72.137
	I0913 19:58:09.328863   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.328879   71926 main.go:141] libmachine: (old-k8s-version-234290) Waiting for SSH to be available...
	I0913 19:58:09.328907   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | skip adding static IP to network mk-old-k8s-version-234290 - found existing host DHCP lease matching {name: "old-k8s-version-234290", mac: "52:54:00:11:33:43", ip: "192.168.72.137"}
	I0913 19:58:09.328936   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Getting to WaitForSSH function...
	I0913 19:58:09.331039   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331303   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.331334   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.331400   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH client type: external
	I0913 19:58:09.331435   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa (-rw-------)
	I0913 19:58:09.331513   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:09.331538   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | About to run SSH command:
	I0913 19:58:09.331555   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | exit 0
	I0913 19:58:09.458142   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:09.458503   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetConfigRaw
	I0913 19:58:09.459065   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.461622   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.461915   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.461939   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.462215   71926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/config.json ...
	I0913 19:58:09.462430   71926 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:09.462454   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:09.462652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.464731   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465001   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.465025   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.465140   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.465292   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465448   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.465586   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.465754   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.465934   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.465944   71926 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:09.578790   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:09.578821   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579119   71926 buildroot.go:166] provisioning hostname "old-k8s-version-234290"
	I0913 19:58:09.579148   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.579352   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.582085   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582501   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.582530   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.582677   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.582828   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.582982   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.583111   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.583310   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.583519   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.583536   71926 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-234290 && echo "old-k8s-version-234290" | sudo tee /etc/hostname
	I0913 19:58:09.712818   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-234290
	
	I0913 19:58:09.712849   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.715668   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716012   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.716033   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.716166   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:09.716370   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716550   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:09.716740   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:09.716935   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:09.717120   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:09.717145   71926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-234290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-234290/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-234290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:09.835137   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:09.835169   71926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:09.835198   71926 buildroot.go:174] setting up certificates
	I0913 19:58:09.835207   71926 provision.go:84] configureAuth start
	I0913 19:58:09.835220   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetMachineName
	I0913 19:58:09.835493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:09.838090   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838460   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.838496   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.838570   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:09.840786   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841121   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:09.841147   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:09.841283   71926 provision.go:143] copyHostCerts
	I0913 19:58:09.841339   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:09.841349   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:09.841404   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:09.841495   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:09.841503   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:09.841525   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:09.841589   71926 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:09.841596   71926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:09.841613   71926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:09.841671   71926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-234290 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-234290]
	I0913 19:58:10.111960   71926 provision.go:177] copyRemoteCerts
	I0913 19:58:10.112014   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:10.112042   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.114801   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115156   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.115203   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.115378   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.115543   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.115703   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.115816   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.201034   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:10.225250   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0913 19:58:10.248653   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 19:58:10.273946   71926 provision.go:87] duration metric: took 438.72916ms to configureAuth
	I0913 19:58:10.273971   71926 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:10.274216   71926 config.go:182] Loaded profile config "old-k8s-version-234290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0913 19:58:10.274293   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.276661   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.276973   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.277010   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.277098   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.277315   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277465   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.277593   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.277755   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.277914   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.277929   71926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:10.506712   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:10.506740   71926 machine.go:96] duration metric: took 1.044293936s to provisionDockerMachine
	I0913 19:58:10.506752   71926 start.go:293] postStartSetup for "old-k8s-version-234290" (driver="kvm2")
	I0913 19:58:10.506761   71926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:10.506786   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.507087   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:10.507121   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.509746   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510073   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.510115   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.510319   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.510493   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.510652   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.510791   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.608187   71926 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:10.612545   71926 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:10.612570   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:10.612659   71926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:10.612760   71926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:10.612876   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:10.623923   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:10.649632   71926 start.go:296] duration metric: took 142.866871ms for postStartSetup
	I0913 19:58:10.649667   71926 fix.go:56] duration metric: took 19.32233086s for fixHost
	I0913 19:58:10.649685   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.652317   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652626   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.652654   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.652772   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.652954   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653102   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.653224   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.653391   71926 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:10.653546   71926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0913 19:58:10.653555   71926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:10.762947   71926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257490.741456651
	
	I0913 19:58:10.763010   71926 fix.go:216] guest clock: 1726257490.741456651
	I0913 19:58:10.763025   71926 fix.go:229] Guest: 2024-09-13 19:58:10.741456651 +0000 UTC Remote: 2024-09-13 19:58:10.649671047 +0000 UTC m=+268.740736518 (delta=91.785604ms)
	I0913 19:58:10.763052   71926 fix.go:200] guest clock delta is within tolerance: 91.785604ms
	I0913 19:58:10.763059   71926 start.go:83] releasing machines lock for "old-k8s-version-234290", held for 19.435752772s
	I0913 19:58:10.763095   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.763368   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:10.766318   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.766797   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.766838   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.767094   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767602   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767791   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .DriverName
	I0913 19:58:10.767901   71926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:10.767938   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.768057   71926 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:10.768077   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHHostname
	I0913 19:58:10.770835   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.770860   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771204   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771231   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771269   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:10.771299   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:10.771390   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771538   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771562   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHPort
	I0913 19:58:10.771722   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771758   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHKeyPath
	I0913 19:58:10.771888   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetSSHUsername
	I0913 19:58:10.771910   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.772009   71926 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/old-k8s-version-234290/id_rsa Username:docker}
	I0913 19:58:10.851445   71926 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:10.876291   71926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:11.022514   71926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:11.029415   71926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:11.029478   71926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:11.046313   71926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:11.046338   71926 start.go:495] detecting cgroup driver to use...
	I0913 19:58:11.046411   71926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:11.064638   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:11.079465   71926 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:11.079555   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:11.092965   71926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:11.107487   71926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:11.225260   71926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:11.379777   71926 docker.go:233] disabling docker service ...
	I0913 19:58:11.379918   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:11.399146   71926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:11.418820   71926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:11.608056   71926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:11.793596   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:11.809432   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:11.831794   71926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0913 19:58:11.831867   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.843613   71926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:11.843681   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.856437   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.868563   71926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:11.880448   71926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:11.892795   71926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:11.903756   71926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:11.903820   71926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:11.919323   71926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:11.932414   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:12.084112   71926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:12.186561   71926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:12.186626   71926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:12.200935   71926 start.go:563] Will wait 60s for crictl version
	I0913 19:58:12.200999   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:12.204888   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:12.251729   71926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:12.251822   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.284955   71926 ssh_runner.go:195] Run: crio --version
	I0913 19:58:12.316561   71926 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0913 19:58:10.787457   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Start
	I0913 19:58:10.787620   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring networks are active...
	I0913 19:58:10.788313   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network default is active
	I0913 19:58:10.788694   71233 main.go:141] libmachine: (embed-certs-175374) Ensuring network mk-embed-certs-175374 is active
	I0913 19:58:10.789203   71233 main.go:141] libmachine: (embed-certs-175374) Getting domain xml...
	I0913 19:58:10.790255   71233 main.go:141] libmachine: (embed-certs-175374) Creating domain...
	I0913 19:58:12.138157   71233 main.go:141] libmachine: (embed-certs-175374) Waiting to get IP...
	I0913 19:58:12.139236   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.139700   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.139753   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.139667   73146 retry.go:31] will retry after 297.211027ms: waiting for machine to come up
	I0913 19:58:12.438089   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.438546   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.438573   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.438508   73146 retry.go:31] will retry after 295.16699ms: waiting for machine to come up
	I0913 19:58:12.735114   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:12.735588   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:12.735624   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:12.735558   73146 retry.go:31] will retry after 439.751807ms: waiting for machine to come up
	I0913 19:58:13.177095   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.177613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.177643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.177584   73146 retry.go:31] will retry after 561.896034ms: waiting for machine to come up
	I0913 19:58:13.741520   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:13.742128   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:13.742164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:13.742027   73146 retry.go:31] will retry after 713.20889ms: waiting for machine to come up
	I0913 19:58:11.865414   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.865756   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:11.267770   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:13.269041   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:15.768231   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:12.317917   71926 main.go:141] libmachine: (old-k8s-version-234290) Calling .GetIP
	I0913 19:58:12.320920   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321261   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:33:43", ip: ""} in network mk-old-k8s-version-234290: {Iface:virbr4 ExpiryTime:2024-09-13 20:58:03 +0000 UTC Type:0 Mac:52:54:00:11:33:43 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-234290 Clientid:01:52:54:00:11:33:43}
	I0913 19:58:12.321291   71926 main.go:141] libmachine: (old-k8s-version-234290) DBG | domain old-k8s-version-234290 has defined IP address 192.168.72.137 and MAC address 52:54:00:11:33:43 in network mk-old-k8s-version-234290
	I0913 19:58:12.321498   71926 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:12.325745   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:12.340042   71926 kubeadm.go:883] updating cluster {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:12.340163   71926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 19:58:12.340227   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:12.387772   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:12.387831   71926 ssh_runner.go:195] Run: which lz4
	I0913 19:58:12.391877   71926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:12.397084   71926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:12.397111   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0913 19:58:14.108496   71926 crio.go:462] duration metric: took 1.716639607s to copy over tarball
	I0913 19:58:14.108580   71926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:14.457047   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:14.457530   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:14.457578   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:14.457461   73146 retry.go:31] will retry after 696.737044ms: waiting for machine to come up
	I0913 19:58:15.156145   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.156601   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.156634   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.156555   73146 retry.go:31] will retry after 799.457406ms: waiting for machine to come up
	I0913 19:58:15.957762   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:15.958268   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:15.958296   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:15.958218   73146 retry.go:31] will retry after 1.037426883s: waiting for machine to come up
	I0913 19:58:16.996752   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:16.997283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:16.997310   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:16.997233   73146 retry.go:31] will retry after 1.529310984s: waiting for machine to come up
	I0913 19:58:18.528167   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:18.528770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:18.528817   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:18.528732   73146 retry.go:31] will retry after 1.63281335s: waiting for machine to come up
	I0913 19:58:15.866154   71702 pod_ready.go:103] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:16.865395   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.865434   71702 pod_ready.go:82] duration metric: took 7.007454177s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.865449   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871374   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:16.871398   71702 pod_ready.go:82] duration metric: took 5.94123ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:16.871410   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.122189   71702 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:19.413846   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.413866   71702 pod_ready.go:82] duration metric: took 2.542449272s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.413880   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419124   71702 pod_ready.go:93] pod "kube-proxy-xqv9m" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.419146   71702 pod_ready.go:82] duration metric: took 5.258451ms for pod "kube-proxy-xqv9m" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.419157   71702 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424347   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:19.424369   71702 pod_ready.go:82] duration metric: took 5.205567ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:19.424378   71702 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:18.266585   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:20.267496   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:17.092899   71926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984287393s)
	I0913 19:58:17.092927   71926 crio.go:469] duration metric: took 2.984399164s to extract the tarball
	I0913 19:58:17.092936   71926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:17.134595   71926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:17.173233   71926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0913 19:58:17.173261   71926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 19:58:17.173353   71926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.173420   71926 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.173496   71926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.173545   71926 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.173432   71926 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0913 19:58:17.173391   71926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.173404   71926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.173354   71926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.174795   71926 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0913 19:58:17.174996   71926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.175016   71926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.175093   71926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.175261   71926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.175274   71926 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.175314   71926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:17.175374   71926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.389976   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.401858   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.422207   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0913 19:58:17.437983   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.441827   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.444353   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.448291   71926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0913 19:58:17.448327   71926 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.448360   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.484630   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.486907   71926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0913 19:58:17.486953   71926 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.486992   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.552972   71926 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0913 19:58:17.553017   71926 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0913 19:58:17.553070   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.553094   71926 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0913 19:58:17.553131   71926 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.553172   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573201   71926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0913 19:58:17.573248   71926 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.573298   71926 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0913 19:58:17.573320   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573340   71926 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.573386   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.573434   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.605151   71926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0913 19:58:17.605199   71926 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.605248   71926 ssh_runner.go:195] Run: which crictl
	I0913 19:58:17.605300   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.605347   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.605439   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.628947   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.628992   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.629158   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.629483   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.714772   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.714812   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.755168   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:17.813880   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:17.813980   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0913 19:58:17.822268   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:17.822277   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0913 19:58:17.864418   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0913 19:58:17.930419   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0913 19:58:18.000454   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0913 19:58:18.000613   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0913 19:58:18.000655   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0913 19:58:18.014400   71926 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0913 19:58:18.065836   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0913 19:58:18.065876   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0913 19:58:18.065922   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0913 19:58:18.068943   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0913 19:58:18.075442   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0913 19:58:18.102221   71926 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0913 19:58:18.330677   71926 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:18.474412   71926 cache_images.go:92] duration metric: took 1.301129458s to LoadCachedImages
	W0913 19:58:18.474515   71926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19636-3902/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0913 19:58:18.474532   71926 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0913 19:58:18.474668   71926 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-234290 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:18.474764   71926 ssh_runner.go:195] Run: crio config
	I0913 19:58:18.528116   71926 cni.go:84] Creating CNI manager for ""
	I0913 19:58:18.528142   71926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:18.528153   71926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:18.528175   71926 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-234290 NodeName:old-k8s-version-234290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0913 19:58:18.528341   71926 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-234290"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
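Note: the YAML above is the kubeadm configuration minikube generated from the options struct logged at kubeadm.go:181 and later copies to /var/tmp/minikube/kubeadm.yaml. A minimal sketch of how such a fragment can be rendered from a Go struct with text/template follows; the struct fields and template text are illustrative and are not minikube's own template.

// Sketch only: render a kubeadm ClusterConfiguration fragment from a struct
// (field names and template are assumptions, not minikube's code).
package main

import (
    "os"
    "text/template"
)

type kubeadmParams struct {
    BindPort          int
    KubernetesVersion string
    PodSubnet         string
    ServiceSubnet     string
    ClusterName       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
    p := kubeadmParams{
        BindPort:          8443,
        KubernetesVersion: "v1.20.0",
        PodSubnet:         "10.244.0.0/16",
        ServiceSubnet:     "10.96.0.0/12",
        ClusterName:       "mk",
    }
    t := template.Must(template.New("kubeadm").Parse(tmpl))
    if err := t.Execute(os.Stdout, p); err != nil {
        panic(err)
    }
}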
	
	I0913 19:58:18.528421   71926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0913 19:58:18.539309   71926 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:18.539396   71926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:18.549279   71926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0913 19:58:18.566974   71926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:18.585652   71926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0913 19:58:18.606817   71926 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:18.610999   71926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:18.623650   71926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:18.738375   71926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:18.759935   71926 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290 for IP: 192.168.72.137
	I0913 19:58:18.759960   71926 certs.go:194] generating shared ca certs ...
	I0913 19:58:18.759976   71926 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:18.760149   71926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:18.760202   71926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:18.760217   71926 certs.go:256] generating profile certs ...
	I0913 19:58:18.760337   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/client.key
	I0913 19:58:18.760412   71926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key.e5f62d17
	I0913 19:58:18.760468   71926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key
	I0913 19:58:18.760623   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:18.760669   71926 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:18.760681   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:18.760718   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:18.760751   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:18.760779   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:18.760832   71926 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:18.761583   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:18.793014   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:18.848745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:18.886745   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:18.924588   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 19:58:18.958481   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 19:58:18.991482   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:19.032324   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/old-k8s-version-234290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 19:58:19.059068   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:19.085949   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:19.113643   71926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:19.145333   71926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:19.163687   71926 ssh_runner.go:195] Run: openssl version
	I0913 19:58:19.171767   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:19.186554   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192330   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.192401   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:19.198792   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:19.210300   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:19.223407   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228291   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.228349   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:19.234308   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:19.245203   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:19.256773   71926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262488   71926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.262571   71926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:19.269483   71926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:19.281592   71926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:19.286741   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:19.293353   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:19.299808   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:19.306799   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:19.313162   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:19.319027   71926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
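Note: each `openssl x509 -noout -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether certificates need regenerating. A rough Go equivalent of that check, using a hypothetical certificate path, is sketched below.

// Sketch only: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -checkend 86400`. The file path is illustrative.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    // Expiring "within d" means now+d falls past the certificate's NotAfter.
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        panic(err)
    }
    fmt.Println("expires within 24h:", soon)
}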
	I0913 19:58:19.325179   71926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-234290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-234290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:19.325264   71926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:19.325324   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.369346   71926 cri.go:89] found id: ""
	I0913 19:58:19.369426   71926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:19.379886   71926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:19.379909   71926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:19.379970   71926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:19.390431   71926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:19.391399   71926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-234290" does not appear in /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:19.392019   71926 kubeconfig.go:62] /home/jenkins/minikube-integration/19636-3902/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-234290" cluster setting kubeconfig missing "old-k8s-version-234290" context setting]
	I0913 19:58:19.392914   71926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:19.407513   71926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:19.419283   71926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0913 19:58:19.419314   71926 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:19.419326   71926 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:19.419379   71926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:19.461864   71926 cri.go:89] found id: ""
	I0913 19:58:19.461935   71926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:19.479746   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:19.490722   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:19.490746   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:19.490793   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:19.500968   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:19.501031   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:19.511172   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:19.521623   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:19.521690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:19.532035   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.542058   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:19.542139   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:19.551747   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:19.561183   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:19.561240   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:19.571631   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:19.582073   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:19.730805   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.577463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.815243   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:20.907599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:21.008611   71926 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:21.008687   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:21.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:20.163342   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:20.163836   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:20.163866   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:20.163797   73146 retry.go:31] will retry after 2.608130242s: waiting for machine to come up
	I0913 19:58:22.773220   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:22.773746   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:22.773773   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:22.773702   73146 retry.go:31] will retry after 2.358024102s: waiting for machine to come up
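Note: the retry.go entries above ("will retry after 2.608130242s", "will retry after 2.358024102s") show libmachine waiting with randomized delays for the embed-certs VM to obtain an IP address over DHCP. A minimal jittered-retry sketch in that spirit follows; the attempt count, delay bounds, and callback are illustrative and this is not minikube's retry helper.

// Sketch only: retry a callback with a random delay between min and max
// after each failure (bounds and attempt count are assumptions).
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

func retryWithJitter(attempts int, min, max time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        d := min + time.Duration(rand.Int63n(int64(max-min)))
        fmt.Printf("will retry after %s: %v\n", d, err)
        time.Sleep(d)
    }
    return err
}

func main() {
    tries := 0
    err := retryWithJitter(5, 2*time.Second, 4*time.Second, func() error {
        tries++
        if tries < 3 {
            return errors.New("waiting for machine to come up")
        }
        return nil
    })
    fmt.Println("done:", err)
}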
	I0913 19:58:21.432080   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:23.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.766841   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:24.767073   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:22.009465   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:22.509128   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:23.509066   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.009717   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:24.509499   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.008831   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:25.509742   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.009748   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:26.509405   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
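Note: the api_server.go block above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` on a roughly 500ms cadence until the apiserver process appears. A minimal local sketch of such a poll loop follows; minikube actually runs the command over SSH inside the guest, and the 4-minute timeout here is an assumption.

// Sketch only: poll pgrep until the kube-apiserver process exists or a
// timeout elapses. pgrep exits 0 when at least one process matches.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(4 * time.Minute) // timeout is an assumption
    for time.Now().Before(deadline) {
        if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            fmt.Println("kube-apiserver process is up")
            return
        }
        time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
    }
    fmt.Println("timed out waiting for kube-apiserver process")
}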
	I0913 19:58:25.134055   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:25.134613   71233 main.go:141] libmachine: (embed-certs-175374) DBG | unable to find current IP address of domain embed-certs-175374 in network mk-embed-certs-175374
	I0913 19:58:25.134637   71233 main.go:141] libmachine: (embed-certs-175374) DBG | I0913 19:58:25.134569   73146 retry.go:31] will retry after 3.938314294s: waiting for machine to come up
	I0913 19:58:29.076283   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.076741   71233 main.go:141] libmachine: (embed-certs-175374) Found IP for machine: 192.168.39.32
	I0913 19:58:29.076760   71233 main.go:141] libmachine: (embed-certs-175374) Reserving static IP address...
	I0913 19:58:29.076770   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has current primary IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.077137   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.077164   71233 main.go:141] libmachine: (embed-certs-175374) DBG | skip adding static IP to network mk-embed-certs-175374 - found existing host DHCP lease matching {name: "embed-certs-175374", mac: "52:54:00:72:57:cd", ip: "192.168.39.32"}
	I0913 19:58:29.077174   71233 main.go:141] libmachine: (embed-certs-175374) Reserved static IP address: 192.168.39.32
	I0913 19:58:29.077185   71233 main.go:141] libmachine: (embed-certs-175374) Waiting for SSH to be available...
	I0913 19:58:29.077194   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Getting to WaitForSSH function...
	I0913 19:58:29.079065   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079375   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.079407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.079508   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH client type: external
	I0913 19:58:29.079559   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa (-rw-------)
	I0913 19:58:29.079600   71233 main.go:141] libmachine: (embed-certs-175374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 19:58:29.079615   71233 main.go:141] libmachine: (embed-certs-175374) DBG | About to run SSH command:
	I0913 19:58:29.079643   71233 main.go:141] libmachine: (embed-certs-175374) DBG | exit 0
	I0913 19:58:29.202138   71233 main.go:141] libmachine: (embed-certs-175374) DBG | SSH cmd err, output: <nil>: 
	I0913 19:58:29.202522   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetConfigRaw
	I0913 19:58:26.431735   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:28.930537   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:27.266331   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.272314   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:29.203122   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.205936   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206304   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.206326   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.206567   71233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/config.json ...
	I0913 19:58:29.206799   71233 machine.go:93] provisionDockerMachine start ...
	I0913 19:58:29.206820   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:29.207047   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.209407   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209733   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.209755   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.209880   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.210087   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210264   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.210475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.210613   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.210806   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.210819   71233 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 19:58:29.318615   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 19:58:29.318647   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.318874   71233 buildroot.go:166] provisioning hostname "embed-certs-175374"
	I0913 19:58:29.318891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.319050   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.321627   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.321981   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.322007   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.322233   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.322411   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.322665   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.322814   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.322993   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.323011   71233 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175374 && echo "embed-certs-175374" | sudo tee /etc/hostname
	I0913 19:58:29.441656   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175374
	
	I0913 19:58:29.441686   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.444529   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.444942   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.444973   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.445107   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.445291   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.445560   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.445756   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.445939   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.445961   71233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175374/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 19:58:29.555773   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 19:58:29.555798   71233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3902/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3902/.minikube}
	I0913 19:58:29.555815   71233 buildroot.go:174] setting up certificates
	I0913 19:58:29.555836   71233 provision.go:84] configureAuth start
	I0913 19:58:29.555845   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetMachineName
	I0913 19:58:29.556128   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:29.559064   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559438   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.559459   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.559589   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.561763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562078   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.562120   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.562218   71233 provision.go:143] copyHostCerts
	I0913 19:58:29.562277   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem, removing ...
	I0913 19:58:29.562288   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem
	I0913 19:58:29.562362   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/ca.pem (1082 bytes)
	I0913 19:58:29.562476   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem, removing ...
	I0913 19:58:29.562487   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem
	I0913 19:58:29.562519   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/cert.pem (1123 bytes)
	I0913 19:58:29.562621   71233 exec_runner.go:144] found /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem, removing ...
	I0913 19:58:29.562630   71233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem
	I0913 19:58:29.562657   71233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3902/.minikube/key.pem (1679 bytes)
	I0913 19:58:29.562729   71233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175374 san=[127.0.0.1 192.168.39.32 embed-certs-175374 localhost minikube]
	I0913 19:58:29.724450   71233 provision.go:177] copyRemoteCerts
	I0913 19:58:29.724502   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 19:58:29.724524   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.727348   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727653   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.727680   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.727870   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.728028   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.728142   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.728291   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:29.807752   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 19:58:29.832344   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 19:58:29.856275   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 19:58:29.879235   71233 provision.go:87] duration metric: took 323.386002ms to configureAuth
	I0913 19:58:29.879264   71233 buildroot.go:189] setting minikube options for container-runtime
	I0913 19:58:29.879464   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:29.879535   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:29.882178   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882577   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:29.882608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:29.882736   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:29.883001   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883187   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:29.883328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:29.883519   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:29.883723   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:29.883747   71233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 19:58:30.103532   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 19:58:30.103557   71233 machine.go:96] duration metric: took 896.744413ms to provisionDockerMachine
	I0913 19:58:30.103574   71233 start.go:293] postStartSetup for "embed-certs-175374" (driver="kvm2")
	I0913 19:58:30.103588   71233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 19:58:30.103610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.103908   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 19:58:30.103935   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.106889   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107288   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.107320   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.107434   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.107613   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.107766   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.107900   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.189085   71233 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 19:58:30.193560   71233 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 19:58:30.193587   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/addons for local assets ...
	I0913 19:58:30.193667   71233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3902/.minikube/files for local assets ...
	I0913 19:58:30.193767   71233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem -> 110792.pem in /etc/ssl/certs
	I0913 19:58:30.193878   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 19:58:30.203533   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:30.227895   71233 start.go:296] duration metric: took 124.307474ms for postStartSetup
	I0913 19:58:30.227936   71233 fix.go:56] duration metric: took 19.464716966s for fixHost
	I0913 19:58:30.227956   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.230672   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.230977   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.231003   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.231167   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.231432   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231610   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.231758   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.231913   71233 main.go:141] libmachine: Using SSH client type: native
	I0913 19:58:30.232089   71233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0913 19:58:30.232100   71233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 19:58:30.331036   71233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726257510.303110870
	
	I0913 19:58:30.331065   71233 fix.go:216] guest clock: 1726257510.303110870
	I0913 19:58:30.331076   71233 fix.go:229] Guest: 2024-09-13 19:58:30.30311087 +0000 UTC Remote: 2024-09-13 19:58:30.227940037 +0000 UTC m=+356.058673795 (delta=75.170833ms)
	I0913 19:58:30.331112   71233 fix.go:200] guest clock delta is within tolerance: 75.170833ms
	I0913 19:58:30.331117   71233 start.go:83] releasing machines lock for "embed-certs-175374", held for 19.567934671s
	I0913 19:58:30.331140   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.331423   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:30.334022   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334506   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.334533   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.334671   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335259   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335431   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:30.335489   71233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 19:58:30.335528   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.335642   71233 ssh_runner.go:195] Run: cat /version.json
	I0913 19:58:30.335660   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:30.338223   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338556   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338585   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.338608   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.338738   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.338891   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339037   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:30.339057   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:30.339072   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339199   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.339247   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:30.339387   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:30.339526   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:30.339639   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:30.415622   71233 ssh_runner.go:195] Run: systemctl --version
	I0913 19:58:30.440604   71233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 19:58:30.586022   71233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 19:58:30.594584   71233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 19:58:30.594660   71233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 19:58:30.611349   71233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 19:58:30.611371   71233 start.go:495] detecting cgroup driver to use...
	I0913 19:58:30.611431   71233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 19:58:30.626916   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 19:58:30.641834   71233 docker.go:217] disabling cri-docker service (if available) ...
	I0913 19:58:30.641899   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 19:58:30.656109   71233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 19:58:30.670053   71233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 19:58:30.785264   71233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 19:58:30.936484   71233 docker.go:233] disabling docker service ...
	I0913 19:58:30.936548   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 19:58:30.951998   71233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 19:58:30.965863   71233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 19:58:31.117753   71233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 19:58:31.241750   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 19:58:31.255910   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 19:58:31.276372   71233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 19:58:31.276453   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.286686   71233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 19:58:31.286749   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.296762   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.306752   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.317435   71233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 19:58:31.328859   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.339508   71233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.358855   71233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 19:58:31.369756   71233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 19:58:31.379838   71233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 19:58:31.379908   71233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 19:58:31.392714   71233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 19:58:31.402973   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:31.543089   71233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 19:58:31.635184   71233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 19:58:31.635259   71233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 19:58:31.640122   71233 start.go:563] Will wait 60s for crictl version
	I0913 19:58:31.640190   71233 ssh_runner.go:195] Run: which crictl
	I0913 19:58:31.644326   71233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 19:58:31.687840   71233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 19:58:31.687936   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.716376   71233 ssh_runner.go:195] Run: crio --version
	I0913 19:58:31.749357   71233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 19:58:27.009130   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:27.509574   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.009714   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:28.508758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.008768   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:29.509523   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.009031   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:30.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.009653   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.509554   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:31.750649   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetIP
	I0913 19:58:31.753235   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753547   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:31.753576   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:31.753809   71233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 19:58:31.757927   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:31.771018   71233 kubeadm.go:883] updating cluster {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 19:58:31.771171   71233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 19:58:31.771221   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:31.810741   71233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 19:58:31.810798   71233 ssh_runner.go:195] Run: which lz4
	I0913 19:58:31.814892   71233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 19:58:31.819229   71233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 19:58:31.819269   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 19:58:33.221865   71233 crio.go:462] duration metric: took 1.407002501s to copy over tarball
	I0913 19:58:33.221951   71233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 19:58:30.931694   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.934639   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:31.767243   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:33.767834   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:35.768301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:32.009337   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:32.509870   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.009618   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:33.509364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.009124   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:34.509744   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.009610   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.509772   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.009680   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:36.509743   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.282125   71233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060124935s)
	I0913 19:58:35.282151   71233 crio.go:469] duration metric: took 2.060254719s to extract the tarball
	I0913 19:58:35.282158   71233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 19:58:35.320685   71233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 19:58:35.364371   71233 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 19:58:35.364396   71233 cache_images.go:84] Images are preloaded, skipping loading
	I0913 19:58:35.364404   71233 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0913 19:58:35.364505   71233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 19:58:35.364574   71233 ssh_runner.go:195] Run: crio config
	I0913 19:58:35.409662   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:35.409684   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:35.409692   71233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 19:58:35.409711   71233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175374 NodeName:embed-certs-175374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 19:58:35.409829   71233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175374"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 19:58:35.409886   71233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 19:58:35.420286   71233 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 19:58:35.420354   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 19:58:35.430624   71233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0913 19:58:35.448662   71233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 19:58:35.465838   71233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0913 19:58:35.483262   71233 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0913 19:58:35.487299   71233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 19:58:35.500571   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:35.615618   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:35.634191   71233 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374 for IP: 192.168.39.32
	I0913 19:58:35.634216   71233 certs.go:194] generating shared ca certs ...
	I0913 19:58:35.634237   71233 certs.go:226] acquiring lock for ca certs: {Name:mke780aab4c2895bded11772e31c7a357d07742c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:35.634421   71233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key
	I0913 19:58:35.634489   71233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key
	I0913 19:58:35.634503   71233 certs.go:256] generating profile certs ...
	I0913 19:58:35.634599   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/client.key
	I0913 19:58:35.634664   71233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key.f26b0d46
	I0913 19:58:35.634719   71233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key
	I0913 19:58:35.634847   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem (1338 bytes)
	W0913 19:58:35.634888   71233 certs.go:480] ignoring /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079_empty.pem, impossibly tiny 0 bytes
	I0913 19:58:35.634903   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 19:58:35.634940   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/ca.pem (1082 bytes)
	I0913 19:58:35.634974   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/cert.pem (1123 bytes)
	I0913 19:58:35.635013   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/certs/key.pem (1679 bytes)
	I0913 19:58:35.635070   71233 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem (1708 bytes)
	I0913 19:58:35.635679   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 19:58:35.680013   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 19:58:35.708836   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 19:58:35.742138   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 19:58:35.783230   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0913 19:58:35.816022   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 19:58:35.847365   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 19:58:35.871389   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/embed-certs-175374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 19:58:35.896617   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/ssl/certs/110792.pem --> /usr/share/ca-certificates/110792.pem (1708 bytes)
	I0913 19:58:35.920811   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 19:58:35.947119   71233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3902/.minikube/certs/11079.pem --> /usr/share/ca-certificates/11079.pem (1338 bytes)
	I0913 19:58:35.971590   71233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 19:58:35.988797   71233 ssh_runner.go:195] Run: openssl version
	I0913 19:58:35.994690   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11079.pem && ln -fs /usr/share/ca-certificates/11079.pem /etc/ssl/certs/11079.pem"
	I0913 19:58:36.006056   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010744   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:38 /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.010813   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11079.pem
	I0913 19:58:36.016820   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11079.pem /etc/ssl/certs/51391683.0"
	I0913 19:58:36.028895   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110792.pem && ln -fs /usr/share/ca-certificates/110792.pem /etc/ssl/certs/110792.pem"
	I0913 19:58:36.040296   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044904   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:38 /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.044948   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110792.pem
	I0913 19:58:36.050727   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110792.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 19:58:36.061195   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 19:58:36.071527   71233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076171   71233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.076204   71233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 19:58:36.081765   71233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 19:58:36.093815   71233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 19:58:36.098729   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 19:58:36.105238   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 19:58:36.111340   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 19:58:36.117349   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 19:58:36.123329   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 19:58:36.129083   71233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 19:58:36.134952   71233 kubeadm.go:392] StartCluster: {Name:embed-certs-175374 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-175374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 19:58:36.135035   71233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 19:58:36.135095   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.177680   71233 cri.go:89] found id: ""
	I0913 19:58:36.177743   71233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 19:58:36.188511   71233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 19:58:36.188531   71233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 19:58:36.188580   71233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 19:58:36.199007   71233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:58:36.200034   71233 kubeconfig.go:125] found "embed-certs-175374" server: "https://192.168.39.32:8443"
	I0913 19:58:36.201838   71233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 19:58:36.211823   71233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0913 19:58:36.211850   71233 kubeadm.go:1160] stopping kube-system containers ...
	I0913 19:58:36.211863   71233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0913 19:58:36.211907   71233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 19:58:36.254383   71233 cri.go:89] found id: ""
	I0913 19:58:36.254452   71233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 19:58:36.274482   71233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 19:58:36.284752   71233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 19:58:36.284776   71233 kubeadm.go:157] found existing configuration files:
	
	I0913 19:58:36.284826   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 19:58:36.294122   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 19:58:36.294186   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 19:58:36.303848   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 19:58:36.313197   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 19:58:36.313270   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 19:58:36.322754   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.332018   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 19:58:36.332078   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 19:58:36.341980   71233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 19:58:36.351251   71233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 19:58:36.351308   71233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 19:58:36.360867   71233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 19:58:36.370253   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:36.476811   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.459731   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.701271   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.795569   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:37.884961   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 19:58:37.885054   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.385265   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.886038   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:35.431757   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.930698   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:38.869696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:37.009084   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:37.509368   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.009698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:38.509699   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.008821   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.509724   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.008865   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:40.509533   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.009397   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:41.508872   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.385638   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.885566   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:39.901409   71233 api_server.go:72] duration metric: took 2.016446791s to wait for apiserver process to appear ...
	I0913 19:58:39.901438   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 19:58:39.901469   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.607623   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.607656   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.607672   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.625107   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0913 19:58:42.625134   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0913 19:58:42.902512   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:42.912382   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:42.912424   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.401981   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.406231   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0913 19:58:43.406253   71233 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0913 19:58:43.901758   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 19:58:43.909236   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 19:58:43.915858   71233 api_server.go:141] control plane version: v1.31.1
	I0913 19:58:43.915878   71233 api_server.go:131] duration metric: took 4.014433541s to wait for apiserver health ...
	I0913 19:58:43.915886   71233 cni.go:84] Creating CNI manager for ""
	I0913 19:58:43.915892   71233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 19:58:43.917333   71233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 19:58:43.918437   71233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 19:58:43.929803   71233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 19:58:43.962264   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 19:58:43.974064   71233 system_pods.go:59] 8 kube-system pods found
	I0913 19:58:43.974124   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0913 19:58:43.974132   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0913 19:58:43.974140   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0913 19:58:43.974146   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0913 19:58:43.974154   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0913 19:58:43.974159   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0913 19:58:43.974168   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 19:58:43.974174   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0913 19:58:43.974180   71233 system_pods.go:74] duration metric: took 11.890984ms to wait for pod list to return data ...
	I0913 19:58:43.974191   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 19:58:43.978060   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 19:58:43.978084   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 19:58:43.978115   71233 node_conditions.go:105] duration metric: took 3.91914ms to run NodePressure ...
	I0913 19:58:43.978136   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 19:58:39.931725   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:41.931904   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.932454   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:44.265300   71233 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270133   71233 kubeadm.go:739] kubelet initialised
	I0913 19:58:44.270161   71233 kubeadm.go:740] duration metric: took 4.829768ms waiting for restarted kubelet to initialise ...
	I0913 19:58:44.270170   71233 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:44.275324   71233 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.280420   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280443   71233 pod_ready.go:82] duration metric: took 5.093507ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.280452   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.280459   71233 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.284917   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284937   71233 pod_ready.go:82] duration metric: took 4.469078ms for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.284945   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "etcd-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.284952   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.288979   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289001   71233 pod_ready.go:82] duration metric: took 4.040314ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.289012   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.289019   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.366067   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366115   71233 pod_ready.go:82] duration metric: took 77.081723ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.366130   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.366138   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:44.768797   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768829   71233 pod_ready.go:82] duration metric: took 402.677833ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:44.768838   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-proxy-jv77q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:44.768845   71233 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.166011   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166046   71233 pod_ready.go:82] duration metric: took 397.193399ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.166059   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.166068   71233 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:45.565304   71233 pod_ready.go:98] node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565328   71233 pod_ready.go:82] duration metric: took 399.249933ms for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 19:58:45.565337   71233 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-175374" hosting pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:45.565350   71233 pod_ready.go:39] duration metric: took 1.295171906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:45.565371   71233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 19:58:45.577831   71233 ops.go:34] apiserver oom_adj: -16
	I0913 19:58:45.577857   71233 kubeadm.go:597] duration metric: took 9.389319229s to restartPrimaryControlPlane
	I0913 19:58:45.577868   71233 kubeadm.go:394] duration metric: took 9.442921883s to StartCluster
	I0913 19:58:45.577884   71233 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.577967   71233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:58:45.579765   71233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 19:58:45.580068   71233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 19:58:45.580156   71233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 19:58:45.580249   71233 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175374"
	I0913 19:58:45.580272   71233 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-175374"
	W0913 19:58:45.580281   71233 addons.go:243] addon storage-provisioner should already be in state true
	I0913 19:58:45.580295   71233 config.go:182] Loaded profile config "embed-certs-175374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:58:45.580311   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580300   71233 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175374"
	I0913 19:58:45.580353   71233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175374"
	I0913 19:58:45.580341   71233 addons.go:69] Setting metrics-server=true in profile "embed-certs-175374"
	I0913 19:58:45.580395   71233 addons.go:234] Setting addon metrics-server=true in "embed-certs-175374"
	W0913 19:58:45.580409   71233 addons.go:243] addon metrics-server should already be in state true
	I0913 19:58:45.580482   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.580753   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580799   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580846   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.580894   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.580952   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.581001   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.581828   71233 out.go:177] * Verifying Kubernetes components...
	I0913 19:58:45.583145   71233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 19:58:45.596215   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0913 19:58:45.596347   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0913 19:58:45.596650   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.596775   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0913 19:58:45.596889   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597150   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.597156   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597175   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597345   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597359   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597606   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.597623   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597659   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.597683   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.597842   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.597952   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.598212   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598243   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.598512   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.598541   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.601548   71233 addons.go:234] Setting addon default-storageclass=true in "embed-certs-175374"
	W0913 19:58:45.601569   71233 addons.go:243] addon default-storageclass should already be in state true
	I0913 19:58:45.601596   71233 host.go:66] Checking if "embed-certs-175374" exists ...
	I0913 19:58:45.601941   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.601971   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.613596   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0913 19:58:45.614086   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.614646   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.614670   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.615015   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.615328   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.615792   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0913 19:58:45.616459   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617057   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.617076   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.617135   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0913 19:58:45.617429   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.617492   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.617538   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.617720   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.618009   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.618029   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.618610   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.619215   71233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:58:45.619257   71233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:58:45.619496   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.619734   71233 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 19:58:45.620863   71233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 19:58:41.266572   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:43.267658   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:45.768086   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:42.009722   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:42.509784   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.009630   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:43.508726   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.009339   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:44.509674   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.509437   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.009589   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:46.509457   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:45.620906   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 19:58:45.620921   71233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 19:58:45.620940   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.622242   71233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:45.622255   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 19:58:45.622272   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.624230   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624735   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.624763   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.624903   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.625063   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.625200   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.625354   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.625501   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.625915   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.625938   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.626141   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.626285   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.626451   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.626625   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.658599   71233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0913 19:58:45.659088   71233 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:58:45.659729   71233 main.go:141] libmachine: Using API Version  1
	I0913 19:58:45.659752   71233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:58:45.660087   71233 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:58:45.660266   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetState
	I0913 19:58:45.661894   71233 main.go:141] libmachine: (embed-certs-175374) Calling .DriverName
	I0913 19:58:45.662127   71233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.662143   71233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 19:58:45.662159   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHHostname
	I0913 19:58:45.664987   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665347   71233 main.go:141] libmachine: (embed-certs-175374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:57:cd", ip: ""} in network mk-embed-certs-175374: {Iface:virbr3 ExpiryTime:2024-09-13 20:58:22 +0000 UTC Type:0 Mac:52:54:00:72:57:cd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:embed-certs-175374 Clientid:01:52:54:00:72:57:cd}
	I0913 19:58:45.665369   71233 main.go:141] libmachine: (embed-certs-175374) DBG | domain embed-certs-175374 has defined IP address 192.168.39.32 and MAC address 52:54:00:72:57:cd in network mk-embed-certs-175374
	I0913 19:58:45.665475   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHPort
	I0913 19:58:45.665622   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHKeyPath
	I0913 19:58:45.665765   71233 main.go:141] libmachine: (embed-certs-175374) Calling .GetSSHUsername
	I0913 19:58:45.665890   71233 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/embed-certs-175374/id_rsa Username:docker}
	I0913 19:58:45.771910   71233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 19:58:45.788103   71233 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:45.849115   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 19:58:45.954823   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 19:58:45.954845   71233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 19:58:45.972602   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 19:58:46.008217   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 19:58:46.008243   71233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 19:58:46.087347   71233 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.087374   71233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 19:58:46.145493   71233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 19:58:46.413833   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.413867   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414152   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414211   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414228   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.414239   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.414257   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.414562   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.414574   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.414587   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.420582   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.420600   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.420839   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.420855   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.960928   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.960961   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961258   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961292   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:46.961298   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:46.961314   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:46.961325   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:46.961592   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:46.961607   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.205831   71233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060299398s)
	I0913 19:58:47.205881   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.205896   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206177   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206198   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206211   71233 main.go:141] libmachine: Making call to close driver server
	I0913 19:58:47.206209   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206218   71233 main.go:141] libmachine: (embed-certs-175374) Calling .Close
	I0913 19:58:47.206422   71233 main.go:141] libmachine: (embed-certs-175374) DBG | Closing plugin on server side
	I0913 19:58:47.206461   71233 main.go:141] libmachine: Successfully made call to close driver server
	I0913 19:58:47.206469   71233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 19:58:47.206482   71233 addons.go:475] Verifying addon metrics-server=true in "embed-certs-175374"
	I0913 19:58:47.208308   71233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0913 19:58:47.209327   71233 addons.go:510] duration metric: took 1.629176141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0913 19:58:47.792485   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:46.431055   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.930705   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:48.265994   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:50.266158   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:47.009340   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:47.509159   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.009550   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:48.509199   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.009364   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:49.509522   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.008790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.509733   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.009675   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:51.509423   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:50.293136   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:52.792201   71233 node_ready.go:53] node "embed-certs-175374" has status "Ready":"False"
	I0913 19:58:53.291781   71233 node_ready.go:49] node "embed-certs-175374" has status "Ready":"True"
	I0913 19:58:53.291808   71233 node_ready.go:38] duration metric: took 7.503674244s for node "embed-certs-175374" to be "Ready" ...
	I0913 19:58:53.291817   71233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 19:58:53.297601   71233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304575   71233 pod_ready.go:93] pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:53.304599   71233 pod_ready.go:82] duration metric: took 6.973055ms for pod "coredns-7c65d6cfc9-lrrkx" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:53.304608   71233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:50.932102   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:53.431177   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.267198   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:54.267301   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:52.009563   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:52.509133   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.008751   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:53.508990   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.009430   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:54.508835   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.009332   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.509474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.009345   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:56.509025   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:55.312022   71233 pod_ready.go:103] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.310407   71233 pod_ready.go:93] pod "etcd-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.310430   71233 pod_ready.go:82] duration metric: took 4.0058159s for pod "etcd-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.310440   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315573   71233 pod_ready.go:93] pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.315592   71233 pod_ready.go:82] duration metric: took 5.146474ms for pod "kube-apiserver-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.315600   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319332   71233 pod_ready.go:93] pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.319347   71233 pod_ready.go:82] duration metric: took 3.741976ms for pod "kube-controller-manager-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.319356   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323231   71233 pod_ready.go:93] pod "kube-proxy-jv77q" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.323247   71233 pod_ready.go:82] duration metric: took 3.886178ms for pod "kube-proxy-jv77q" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.323254   71233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329250   71233 pod_ready.go:93] pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace has status "Ready":"True"
	I0913 19:58:57.329264   71233 pod_ready.go:82] duration metric: took 6.005366ms for pod "kube-scheduler-embed-certs-175374" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:57.329273   71233 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	I0913 19:58:55.932146   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.430922   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:56.765730   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:58.767104   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:58:57.009384   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:57.509443   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.008849   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:58.509514   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.009139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.509433   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.009778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:00.508827   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.009427   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:01.508910   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:58:59.335308   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.335559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.337207   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:00.930860   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.932443   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:01.267236   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:03.765856   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.766799   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:02.009696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:02.509043   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.008825   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:03.509139   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.009549   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:04.509093   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.009633   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.509496   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.008914   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:06.509007   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:05.835701   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.836050   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:05.431045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.431161   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:08.266221   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:10.267540   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:07.009527   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:07.509637   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.009264   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:08.509505   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.008838   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:09.509478   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.009569   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.509697   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.009352   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:11.509280   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:10.335743   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.835060   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:09.930272   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:11.930469   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.431325   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.766317   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:14.766811   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:12.009200   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:12.509540   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.008811   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:13.509512   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.008877   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.509361   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.008823   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:15.509022   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.009176   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:16.509654   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:14.836303   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.336034   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:16.431384   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:18.930816   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.266683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:19.268476   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:17.009758   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:17.509429   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.009470   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:18.509732   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.008954   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:19.509475   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.008916   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:20.509623   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:21.009697   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:21.009781   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:21.044701   71926 cri.go:89] found id: ""
	I0913 19:59:21.044724   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.044734   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:21.044739   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:21.044786   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:21.086372   71926 cri.go:89] found id: ""
	I0913 19:59:21.086399   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.086408   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:21.086413   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:21.086474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:21.121663   71926 cri.go:89] found id: ""
	I0913 19:59:21.121691   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.121702   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:21.121709   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:21.121772   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:21.159432   71926 cri.go:89] found id: ""
	I0913 19:59:21.159462   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.159474   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:21.159481   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:21.159547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:21.196077   71926 cri.go:89] found id: ""
	I0913 19:59:21.196108   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.196120   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:21.196128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:21.196202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:21.228527   71926 cri.go:89] found id: ""
	I0913 19:59:21.228560   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.228572   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:21.228579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:21.228642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:21.261972   71926 cri.go:89] found id: ""
	I0913 19:59:21.262004   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.262015   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:21.262023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:21.262088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:21.297147   71926 cri.go:89] found id: ""
	I0913 19:59:21.297178   71926 logs.go:276] 0 containers: []
	W0913 19:59:21.297191   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:21.297201   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:21.297214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:21.351067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:21.351099   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:21.367453   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:21.367492   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:21.497454   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:21.497474   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:21.497487   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:21.568483   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:21.568519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:19.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:22.336293   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.430519   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:23.930458   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:21.767677   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.267717   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:24.114940   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:24.127514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:24.127597   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:24.164868   71926 cri.go:89] found id: ""
	I0913 19:59:24.164896   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.164909   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:24.164917   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:24.164976   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:24.200342   71926 cri.go:89] found id: ""
	I0913 19:59:24.200374   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.200386   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:24.200393   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:24.200453   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:24.237566   71926 cri.go:89] found id: ""
	I0913 19:59:24.237592   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.237603   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:24.237619   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:24.237691   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:24.275357   71926 cri.go:89] found id: ""
	I0913 19:59:24.275386   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.275397   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:24.275412   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:24.275476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:24.312715   71926 cri.go:89] found id: ""
	I0913 19:59:24.312744   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.312754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:24.312759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:24.312821   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:24.348027   71926 cri.go:89] found id: ""
	I0913 19:59:24.348053   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.348064   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:24.348071   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:24.348149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:24.382195   71926 cri.go:89] found id: ""
	I0913 19:59:24.382219   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.382229   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:24.382235   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:24.382282   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:24.418783   71926 cri.go:89] found id: ""
	I0913 19:59:24.418806   71926 logs.go:276] 0 containers: []
	W0913 19:59:24.418815   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:24.418823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:24.418833   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:24.504681   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:24.504705   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:24.504718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:24.578961   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:24.578993   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:24.623057   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:24.623083   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:24.673801   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:24.673835   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:24.336593   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.835014   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.836636   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:25.932213   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:28.431013   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:26.767205   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:29.266801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:27.188790   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:27.201419   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:27.201476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:27.234511   71926 cri.go:89] found id: ""
	I0913 19:59:27.234535   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.234543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:27.234550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:27.234610   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:27.273066   71926 cri.go:89] found id: ""
	I0913 19:59:27.273089   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.273098   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:27.273112   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:27.273169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:27.306510   71926 cri.go:89] found id: ""
	I0913 19:59:27.306531   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.306540   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:27.306545   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:27.306630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:27.343335   71926 cri.go:89] found id: ""
	I0913 19:59:27.343359   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.343371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:27.343378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:27.343427   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:27.380440   71926 cri.go:89] found id: ""
	I0913 19:59:27.380469   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.380478   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:27.380483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:27.380536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:27.419250   71926 cri.go:89] found id: ""
	I0913 19:59:27.419280   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.419292   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:27.419299   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:27.419370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:27.454315   71926 cri.go:89] found id: ""
	I0913 19:59:27.454337   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.454346   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:27.454352   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:27.454402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:27.491075   71926 cri.go:89] found id: ""
	I0913 19:59:27.491107   71926 logs.go:276] 0 containers: []
	W0913 19:59:27.491118   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:27.491128   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:27.491170   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:27.540849   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:27.540877   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:27.554829   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:27.554860   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:27.624534   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:27.624562   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:27.624577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:27.702577   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:27.702612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:30.242489   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:30.255585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:30.255667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:30.298214   71926 cri.go:89] found id: ""
	I0913 19:59:30.298241   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.298253   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:30.298259   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:30.298332   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:30.335270   71926 cri.go:89] found id: ""
	I0913 19:59:30.335300   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.335309   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:30.335314   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:30.335379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:30.373478   71926 cri.go:89] found id: ""
	I0913 19:59:30.373506   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.373517   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:30.373524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:30.373583   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:30.411820   71926 cri.go:89] found id: ""
	I0913 19:59:30.411845   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.411854   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:30.411863   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:30.411908   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:30.449434   71926 cri.go:89] found id: ""
	I0913 19:59:30.449463   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.449479   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:30.449486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:30.449547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:30.482794   71926 cri.go:89] found id: ""
	I0913 19:59:30.482822   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.482831   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:30.482837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:30.482887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:30.517320   71926 cri.go:89] found id: ""
	I0913 19:59:30.517342   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.517351   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:30.517357   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:30.517404   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:30.553119   71926 cri.go:89] found id: ""
	I0913 19:59:30.553146   71926 logs.go:276] 0 containers: []
	W0913 19:59:30.553154   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:30.553162   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:30.553172   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:30.605857   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:30.605886   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:30.620823   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:30.620855   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:30.689618   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:30.689637   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:30.689650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:30.772324   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:30.772359   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:31.335265   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.336711   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:30.431957   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:32.930866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:31.765595   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.768217   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:33.313109   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:33.327584   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:33.327642   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:33.362880   71926 cri.go:89] found id: ""
	I0913 19:59:33.362904   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.362912   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:33.362919   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:33.362979   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:33.395790   71926 cri.go:89] found id: ""
	I0913 19:59:33.395818   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.395828   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:33.395833   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:33.395883   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:33.430371   71926 cri.go:89] found id: ""
	I0913 19:59:33.430397   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.430405   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:33.430410   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:33.430522   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:33.465466   71926 cri.go:89] found id: ""
	I0913 19:59:33.465494   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.465502   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:33.465508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:33.465554   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:33.500340   71926 cri.go:89] found id: ""
	I0913 19:59:33.500370   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.500385   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:33.500390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:33.500440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:33.537219   71926 cri.go:89] found id: ""
	I0913 19:59:33.537248   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.537259   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:33.537267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:33.537315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:33.576171   71926 cri.go:89] found id: ""
	I0913 19:59:33.576201   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.576209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:33.576214   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:33.576261   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:33.618525   71926 cri.go:89] found id: ""
	I0913 19:59:33.618552   71926 logs.go:276] 0 containers: []
	W0913 19:59:33.618564   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:33.618574   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:33.618588   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:33.667903   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:33.667932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:33.683870   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:33.683897   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:33.755651   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:33.755675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:33.755687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:33.834518   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:33.834563   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.375763   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:36.389874   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:36.389945   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:36.423918   71926 cri.go:89] found id: ""
	I0913 19:59:36.423943   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.423955   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:36.423962   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:36.424021   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:36.461591   71926 cri.go:89] found id: ""
	I0913 19:59:36.461615   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.461627   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:36.461633   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:36.461686   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:36.500927   71926 cri.go:89] found id: ""
	I0913 19:59:36.500951   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.500961   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:36.500966   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:36.501024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:36.538153   71926 cri.go:89] found id: ""
	I0913 19:59:36.538178   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.538189   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:36.538196   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:36.538253   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:36.586559   71926 cri.go:89] found id: ""
	I0913 19:59:36.586593   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.586604   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:36.586612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:36.586671   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:36.635198   71926 cri.go:89] found id: ""
	I0913 19:59:36.635226   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.635238   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:36.635246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:36.635312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:36.679531   71926 cri.go:89] found id: ""
	I0913 19:59:36.679554   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.679565   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:36.679572   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:36.679635   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:36.714288   71926 cri.go:89] found id: ""
	I0913 19:59:36.714315   71926 logs.go:276] 0 containers: []
	W0913 19:59:36.714327   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:36.714338   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:36.714352   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:36.793900   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:36.793938   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:36.831700   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:36.831732   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:36.884424   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:36.884466   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:36.900593   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:36.900622   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 19:59:35.835628   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.836645   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:34.931979   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:37.429866   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:39.431100   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:36.265867   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:38.266340   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:40.767051   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	W0913 19:59:36.979173   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.480036   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:39.494368   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:39.494434   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:39.528620   71926 cri.go:89] found id: ""
	I0913 19:59:39.528647   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.528655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:39.528661   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:39.528708   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:39.564307   71926 cri.go:89] found id: ""
	I0913 19:59:39.564339   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.564348   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:39.564354   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:39.564402   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:39.596786   71926 cri.go:89] found id: ""
	I0913 19:59:39.596813   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.596822   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:39.596828   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:39.596887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:39.640612   71926 cri.go:89] found id: ""
	I0913 19:59:39.640636   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.640649   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:39.640654   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:39.640701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:39.685855   71926 cri.go:89] found id: ""
	I0913 19:59:39.685876   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.685884   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:39.685890   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:39.685937   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:39.723549   71926 cri.go:89] found id: ""
	I0913 19:59:39.723578   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.723586   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:39.723592   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:39.723647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:39.761901   71926 cri.go:89] found id: ""
	I0913 19:59:39.761928   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.761938   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:39.761944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:39.762005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:39.804200   71926 cri.go:89] found id: ""
	I0913 19:59:39.804233   71926 logs.go:276] 0 containers: []
	W0913 19:59:39.804244   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:39.804254   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:39.804268   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:39.843760   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:39.843792   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:39.898610   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:39.898640   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:39.915710   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:39.915733   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:39.991138   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:39.991161   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:39.991175   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:40.335372   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.339270   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:41.431411   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.930395   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:43.266899   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.769316   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:42.567023   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:42.579927   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:42.580001   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:42.613569   71926 cri.go:89] found id: ""
	I0913 19:59:42.613595   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.613603   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:42.613608   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:42.613654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:42.649375   71926 cri.go:89] found id: ""
	I0913 19:59:42.649408   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.649421   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:42.649433   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:42.649502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:42.685299   71926 cri.go:89] found id: ""
	I0913 19:59:42.685322   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.685330   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:42.685336   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:42.685383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:42.718646   71926 cri.go:89] found id: ""
	I0913 19:59:42.718671   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.718680   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:42.718686   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:42.718736   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:42.755277   71926 cri.go:89] found id: ""
	I0913 19:59:42.755310   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.755322   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:42.755330   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:42.755399   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:42.791071   71926 cri.go:89] found id: ""
	I0913 19:59:42.791099   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.791110   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:42.791117   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:42.791191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:42.824895   71926 cri.go:89] found id: ""
	I0913 19:59:42.824924   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.824935   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:42.824942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:42.825004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:42.864526   71926 cri.go:89] found id: ""
	I0913 19:59:42.864555   71926 logs.go:276] 0 containers: []
	W0913 19:59:42.864567   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:42.864576   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:42.864590   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:42.913990   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:42.914023   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:42.929285   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:42.929319   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:43.003029   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:43.003061   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:43.003075   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:43.083457   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:43.083490   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:45.625941   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:45.639111   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:45.639200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:45.672356   71926 cri.go:89] found id: ""
	I0913 19:59:45.672383   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.672391   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:45.672397   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:45.672463   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:45.713528   71926 cri.go:89] found id: ""
	I0913 19:59:45.713551   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.713557   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:45.713564   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:45.713618   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:45.749950   71926 cri.go:89] found id: ""
	I0913 19:59:45.749974   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.749982   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:45.749988   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:45.750036   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:45.787366   71926 cri.go:89] found id: ""
	I0913 19:59:45.787403   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.787415   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:45.787423   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:45.787482   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:45.822464   71926 cri.go:89] found id: ""
	I0913 19:59:45.822493   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.822504   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:45.822511   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:45.822574   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:45.857614   71926 cri.go:89] found id: ""
	I0913 19:59:45.857643   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.857654   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:45.857666   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:45.857716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:45.896323   71926 cri.go:89] found id: ""
	I0913 19:59:45.896349   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.896357   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:45.896362   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:45.896416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:45.935706   71926 cri.go:89] found id: ""
	I0913 19:59:45.935731   71926 logs.go:276] 0 containers: []
	W0913 19:59:45.935742   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:45.935751   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:45.935763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:45.986687   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:45.986721   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:46.001549   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:46.001584   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:46.075482   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:46.075505   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:46.075520   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:46.152094   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:46.152130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:44.836085   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:46.836175   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:45.932069   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:47.932660   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.266623   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:50.766356   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:48.698127   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:48.711198   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:48.711271   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:48.746560   71926 cri.go:89] found id: ""
	I0913 19:59:48.746588   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.746598   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:48.746605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:48.746662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:48.780588   71926 cri.go:89] found id: ""
	I0913 19:59:48.780614   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.780624   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:48.780631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:48.780689   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:48.812521   71926 cri.go:89] found id: ""
	I0913 19:59:48.812547   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.812560   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:48.812567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:48.812626   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:48.850273   71926 cri.go:89] found id: ""
	I0913 19:59:48.850303   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.850314   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:48.850322   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:48.850384   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:48.887865   71926 cri.go:89] found id: ""
	I0913 19:59:48.887888   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.887896   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:48.887901   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:48.887966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:48.928155   71926 cri.go:89] found id: ""
	I0913 19:59:48.928182   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.928193   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:48.928201   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:48.928263   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:48.966159   71926 cri.go:89] found id: ""
	I0913 19:59:48.966185   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.966194   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:48.966199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:48.966267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:48.999627   71926 cri.go:89] found id: ""
	I0913 19:59:48.999654   71926 logs.go:276] 0 containers: []
	W0913 19:59:48.999663   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:48.999674   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:48.999685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:49.087362   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:49.087398   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:49.131037   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:49.131065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:49.183183   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:49.183214   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:49.197511   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:49.197536   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:49.269251   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:51.769494   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:51.783274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:51.783334   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:51.819611   71926 cri.go:89] found id: ""
	I0913 19:59:51.819644   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.819655   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:51.819662   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:51.819722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:51.857054   71926 cri.go:89] found id: ""
	I0913 19:59:51.857081   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.857093   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:51.857101   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:51.857183   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:51.892270   71926 cri.go:89] found id: ""
	I0913 19:59:51.892292   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.892301   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:51.892306   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:51.892354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:51.926615   71926 cri.go:89] found id: ""
	I0913 19:59:51.926641   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.926653   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:51.926667   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:51.926728   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:49.336581   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.837000   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:53.838872   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:49.936518   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.430631   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:52.767109   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:55.265920   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:51.966322   71926 cri.go:89] found id: ""
	I0913 19:59:51.966352   71926 logs.go:276] 0 containers: []
	W0913 19:59:51.966372   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:51.966380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:51.966446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:52.008145   71926 cri.go:89] found id: ""
	I0913 19:59:52.008173   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.008182   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:52.008189   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:52.008241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:52.044486   71926 cri.go:89] found id: ""
	I0913 19:59:52.044512   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.044520   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:52.044527   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:52.044590   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:52.080845   71926 cri.go:89] found id: ""
	I0913 19:59:52.080873   71926 logs.go:276] 0 containers: []
	W0913 19:59:52.080885   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:52.080895   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:52.080910   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:52.094040   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:52.094067   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:52.163809   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:52.163836   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:52.163850   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:52.244680   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:52.244724   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:52.284651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:52.284686   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:54.841167   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:54.853992   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:54.854055   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:54.886801   71926 cri.go:89] found id: ""
	I0913 19:59:54.886830   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.886841   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:54.886848   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:54.886922   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:54.921963   71926 cri.go:89] found id: ""
	I0913 19:59:54.921990   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.922001   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:54.922009   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:54.922074   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:54.960805   71926 cri.go:89] found id: ""
	I0913 19:59:54.960840   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.960852   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:54.960859   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:54.960938   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:54.998466   71926 cri.go:89] found id: ""
	I0913 19:59:54.998490   71926 logs.go:276] 0 containers: []
	W0913 19:59:54.998501   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:54.998508   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:54.998570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:55.036768   71926 cri.go:89] found id: ""
	I0913 19:59:55.036795   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.036803   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:55.036809   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:55.036870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:55.075135   71926 cri.go:89] found id: ""
	I0913 19:59:55.075165   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.075176   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:55.075184   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:55.075244   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:55.112784   71926 cri.go:89] found id: ""
	I0913 19:59:55.112806   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.112815   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:55.112821   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:55.112866   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:55.147189   71926 cri.go:89] found id: ""
	I0913 19:59:55.147215   71926 logs.go:276] 0 containers: []
	W0913 19:59:55.147226   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:55.147236   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:55.147247   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:55.199769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:55.199802   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:55.214075   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:55.214124   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:55.285621   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 19:59:55.285640   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:55.285650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:55.366727   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:55.366762   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
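
The cycle above is one pass of minikube's control-plane wait loop for process 71926: it probes for a kube-apiserver process, asks CRI-O for containers matching each control-plane component (all of which come back empty), then tails the kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same checks can be reproduced by hand on the node; the sketch below assumes shell access to the VM (for example via `minikube ssh -p <profile>`, which is not part of the log), while the individual commands are the ones the harness itself runs above.

    # run on the node; these mirror the ssh_runner commands in the log
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # any apiserver process at all?
    sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, running or exited?
    sudo crictl ps -a --quiet --name=etcd               # same check for etcd
    sudo journalctl -u kubelet -n 400                   # kubelet log tail
    sudo journalctl -u crio -n 400                      # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
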
	I0913 19:59:56.336491   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:58.836762   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:54.932054   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.431007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.266309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.266774   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:57.913453   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:59:57.927109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 19:59:57.927249   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 19:59:57.964607   71926 cri.go:89] found id: ""
	I0913 19:59:57.964635   71926 logs.go:276] 0 containers: []
	W0913 19:59:57.964647   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 19:59:57.964655   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 19:59:57.964723   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 19:59:58.006749   71926 cri.go:89] found id: ""
	I0913 19:59:58.006773   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.006782   71926 logs.go:278] No container was found matching "etcd"
	I0913 19:59:58.006788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 19:59:58.006835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 19:59:58.042438   71926 cri.go:89] found id: ""
	I0913 19:59:58.042461   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.042470   71926 logs.go:278] No container was found matching "coredns"
	I0913 19:59:58.042475   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 19:59:58.042526   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 19:59:58.076413   71926 cri.go:89] found id: ""
	I0913 19:59:58.076443   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.076456   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 19:59:58.076463   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 19:59:58.076525   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 19:59:58.110474   71926 cri.go:89] found id: ""
	I0913 19:59:58.110495   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.110502   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 19:59:58.110507   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 19:59:58.110555   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 19:59:58.145068   71926 cri.go:89] found id: ""
	I0913 19:59:58.145090   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.145098   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 19:59:58.145104   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 19:59:58.145152   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 19:59:58.179071   71926 cri.go:89] found id: ""
	I0913 19:59:58.179102   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.179115   71926 logs.go:278] No container was found matching "kindnet"
	I0913 19:59:58.179122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 19:59:58.179176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 19:59:58.212749   71926 cri.go:89] found id: ""
	I0913 19:59:58.212779   71926 logs.go:276] 0 containers: []
	W0913 19:59:58.212791   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 19:59:58.212801   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 19:59:58.212814   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 19:59:58.297012   71926 logs.go:123] Gathering logs for container status ...
	I0913 19:59:58.297046   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 19:59:58.336206   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 19:59:58.336229   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 19:59:58.388982   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 19:59:58.389014   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 19:59:58.404582   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 19:59:58.404612   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 19:59:58.476882   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
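
Every "describe nodes" attempt in this loop fails the same way: the bundled v1.20.0 kubectl cannot reach the API server on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container has been created yet). A minimal way to confirm that nothing is bound to that port is sketched below; the ss check is an assumption (standard iproute2 tooling, not something the harness runs), while the kubectl invocation is the exact command from the log.

    # on the node: is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # once a listener appears, the command minikube keeps retrying should succeed
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
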
	I0913 20:00:00.977647   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:00.991660   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:00.991743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:01.028753   71926 cri.go:89] found id: ""
	I0913 20:00:01.028779   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.028788   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:01.028794   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:01.028843   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:01.064606   71926 cri.go:89] found id: ""
	I0913 20:00:01.064638   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.064647   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:01.064652   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:01.064701   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:01.100564   71926 cri.go:89] found id: ""
	I0913 20:00:01.100597   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.100609   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:01.100615   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:01.100678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:01.135197   71926 cri.go:89] found id: ""
	I0913 20:00:01.135230   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.135242   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:01.135249   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:01.135313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:01.187712   71926 cri.go:89] found id: ""
	I0913 20:00:01.187783   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.187796   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:01.187804   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:01.187870   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:01.225400   71926 cri.go:89] found id: ""
	I0913 20:00:01.225434   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.225443   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:01.225449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:01.225511   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:01.263652   71926 cri.go:89] found id: ""
	I0913 20:00:01.263685   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.263696   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:01.263703   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:01.263764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:01.302630   71926 cri.go:89] found id: ""
	I0913 20:00:01.302652   71926 logs.go:276] 0 containers: []
	W0913 20:00:01.302661   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:01.302669   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:01.302680   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:01.316837   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:01.316871   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:01.389124   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:01.389149   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:01.389159   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:01.475528   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:01.475565   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:01.518896   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:01.518922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:01.338229   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.836029   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 19:59:59.932112   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.932389   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.932525   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:01.267699   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:03.268309   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:05.765913   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:04.069894   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:04.083877   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:04.083940   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:04.119079   71926 cri.go:89] found id: ""
	I0913 20:00:04.119109   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.119118   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:04.119123   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:04.119175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:04.154999   71926 cri.go:89] found id: ""
	I0913 20:00:04.155026   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.155035   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:04.155040   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:04.155087   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:04.189390   71926 cri.go:89] found id: ""
	I0913 20:00:04.189414   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.189422   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:04.189428   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:04.189477   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:04.224883   71926 cri.go:89] found id: ""
	I0913 20:00:04.224912   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.224924   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:04.224932   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:04.224990   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:04.265300   71926 cri.go:89] found id: ""
	I0913 20:00:04.265328   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.265340   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:04.265347   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:04.265403   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:04.308225   71926 cri.go:89] found id: ""
	I0913 20:00:04.308253   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.308264   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:04.308271   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:04.308339   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:04.347508   71926 cri.go:89] found id: ""
	I0913 20:00:04.347539   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.347552   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:04.347558   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:04.347615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:04.385610   71926 cri.go:89] found id: ""
	I0913 20:00:04.385635   71926 logs.go:276] 0 containers: []
	W0913 20:00:04.385644   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:04.385651   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:04.385664   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:04.438181   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:04.438210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:04.452207   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:04.452231   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:04.527920   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:04.527942   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:04.527956   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:04.617212   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:04.617256   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:05.836478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.336478   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:06.429978   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.430153   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:08.266149   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.267683   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:07.159525   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:07.172355   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:07.172433   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:07.210683   71926 cri.go:89] found id: ""
	I0913 20:00:07.210713   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.210725   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:07.210733   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:07.210794   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:07.252477   71926 cri.go:89] found id: ""
	I0913 20:00:07.252501   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.252511   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:07.252516   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:07.252563   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:07.293646   71926 cri.go:89] found id: ""
	I0913 20:00:07.293671   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.293679   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:07.293685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:07.293744   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:07.328603   71926 cri.go:89] found id: ""
	I0913 20:00:07.328631   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.328644   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:07.328652   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:07.328716   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:07.366358   71926 cri.go:89] found id: ""
	I0913 20:00:07.366385   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.366395   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:07.366402   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:07.366480   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:07.400380   71926 cri.go:89] found id: ""
	I0913 20:00:07.400406   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.400417   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:07.400425   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:07.400476   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:07.438332   71926 cri.go:89] found id: ""
	I0913 20:00:07.438360   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.438371   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:07.438380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:07.438437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:07.477262   71926 cri.go:89] found id: ""
	I0913 20:00:07.477294   71926 logs.go:276] 0 containers: []
	W0913 20:00:07.477305   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:07.477315   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:07.477329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:07.528404   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:07.528437   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:07.543025   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:07.543058   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:07.619580   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:07.619599   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:07.619611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:07.698002   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:07.698037   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.241528   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:10.255274   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:10.255350   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:10.289635   71926 cri.go:89] found id: ""
	I0913 20:00:10.289660   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.289670   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:10.289675   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:10.289726   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:10.323749   71926 cri.go:89] found id: ""
	I0913 20:00:10.323772   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.323781   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:10.323788   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:10.323835   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:10.360399   71926 cri.go:89] found id: ""
	I0913 20:00:10.360424   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.360432   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:10.360441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:10.360517   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:10.396685   71926 cri.go:89] found id: ""
	I0913 20:00:10.396714   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.396724   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:10.396731   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:10.396793   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:10.439076   71926 cri.go:89] found id: ""
	I0913 20:00:10.439104   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.439116   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:10.439126   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:10.439202   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:10.484451   71926 cri.go:89] found id: ""
	I0913 20:00:10.484474   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.484483   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:10.484490   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:10.484553   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:10.521271   71926 cri.go:89] found id: ""
	I0913 20:00:10.521297   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.521307   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:10.521313   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:10.521371   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:10.556468   71926 cri.go:89] found id: ""
	I0913 20:00:10.556490   71926 logs.go:276] 0 containers: []
	W0913 20:00:10.556499   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:10.556507   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:10.556519   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:10.593210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:10.593237   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:10.645456   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:10.645493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:10.659365   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:10.659393   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:10.734764   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:10.734786   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:10.734800   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:10.338631   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.835744   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:10.430954   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.931007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:12.767070   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.267220   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:13.320229   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:13.335065   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:13.335136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:13.373500   71926 cri.go:89] found id: ""
	I0913 20:00:13.373531   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.373543   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:13.373550   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:13.373613   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:13.410784   71926 cri.go:89] found id: ""
	I0913 20:00:13.410815   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.410827   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:13.410834   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:13.410900   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:13.447564   71926 cri.go:89] found id: ""
	I0913 20:00:13.447592   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.447603   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:13.447611   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:13.447668   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:13.482858   71926 cri.go:89] found id: ""
	I0913 20:00:13.482885   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.482895   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:13.482902   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:13.482963   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:13.517827   71926 cri.go:89] found id: ""
	I0913 20:00:13.517853   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.517864   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:13.517870   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:13.517932   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:13.553032   71926 cri.go:89] found id: ""
	I0913 20:00:13.553063   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.553081   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:13.553088   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:13.553149   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:13.588527   71926 cri.go:89] found id: ""
	I0913 20:00:13.588553   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.588561   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:13.588567   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:13.588620   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:13.625094   71926 cri.go:89] found id: ""
	I0913 20:00:13.625118   71926 logs.go:276] 0 containers: []
	W0913 20:00:13.625131   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:13.625141   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:13.625158   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:13.677821   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:13.677851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:13.691860   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:13.691887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:13.764966   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:13.764993   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:13.765009   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:13.847866   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:13.847908   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.389642   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:16.403272   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:16.403331   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:16.440078   71926 cri.go:89] found id: ""
	I0913 20:00:16.440104   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.440114   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:16.440122   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:16.440190   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:16.474256   71926 cri.go:89] found id: ""
	I0913 20:00:16.474283   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.474301   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:16.474308   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:16.474366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:16.510715   71926 cri.go:89] found id: ""
	I0913 20:00:16.510749   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.510760   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:16.510767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:16.510828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:16.547051   71926 cri.go:89] found id: ""
	I0913 20:00:16.547081   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.547090   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:16.547095   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:16.547181   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:16.583643   71926 cri.go:89] found id: ""
	I0913 20:00:16.583673   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.583684   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:16.583692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:16.583751   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:16.620508   71926 cri.go:89] found id: ""
	I0913 20:00:16.620531   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.620538   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:16.620544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:16.620591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:16.659447   71926 cri.go:89] found id: ""
	I0913 20:00:16.659474   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.659483   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:16.659487   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:16.659551   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:16.696858   71926 cri.go:89] found id: ""
	I0913 20:00:16.696883   71926 logs.go:276] 0 containers: []
	W0913 20:00:16.696892   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:16.696900   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:16.696913   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:16.767299   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:16.767322   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:16.767337   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:16.847320   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:16.847356   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:16.922176   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:16.922209   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:14.836490   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.838300   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:15.430562   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.431842   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:17.766696   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.767921   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:16.982583   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:16.982627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.496581   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:19.509942   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:19.510040   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:19.546457   71926 cri.go:89] found id: ""
	I0913 20:00:19.546493   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.546510   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:19.546517   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:19.546584   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:19.585577   71926 cri.go:89] found id: ""
	I0913 20:00:19.585613   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.585624   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:19.585631   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:19.585681   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:19.624383   71926 cri.go:89] found id: ""
	I0913 20:00:19.624416   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.624428   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:19.624436   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:19.624492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:19.662531   71926 cri.go:89] found id: ""
	I0913 20:00:19.662558   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.662570   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:19.662578   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:19.662636   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:19.704253   71926 cri.go:89] found id: ""
	I0913 20:00:19.704278   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.704290   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:19.704296   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:19.704354   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:19.743087   71926 cri.go:89] found id: ""
	I0913 20:00:19.743113   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.743122   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:19.743128   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:19.743175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:19.779598   71926 cri.go:89] found id: ""
	I0913 20:00:19.779625   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.779635   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:19.779643   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:19.779692   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:19.817509   71926 cri.go:89] found id: ""
	I0913 20:00:19.817541   71926 logs.go:276] 0 containers: []
	W0913 20:00:19.817553   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:19.817564   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:19.817577   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:19.870071   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:19.870120   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:19.884612   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:19.884638   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:19.959650   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:19.959675   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:19.959687   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:20.040351   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:20.040384   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:19.335437   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:21.335913   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:23.838023   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:19.931244   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.430934   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.431456   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.266411   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:24.266828   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:22.581978   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:22.599144   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:22.599205   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:22.652862   71926 cri.go:89] found id: ""
	I0913 20:00:22.652894   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.652906   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:22.652913   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:22.652998   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:22.712188   71926 cri.go:89] found id: ""
	I0913 20:00:22.712220   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.712231   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:22.712238   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:22.712299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:22.750212   71926 cri.go:89] found id: ""
	I0913 20:00:22.750238   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.750249   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:22.750257   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:22.750319   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:22.786432   71926 cri.go:89] found id: ""
	I0913 20:00:22.786463   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.786475   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:22.786483   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:22.786547   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:22.822681   71926 cri.go:89] found id: ""
	I0913 20:00:22.822707   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.822716   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:22.822722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:22.822780   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:22.862076   71926 cri.go:89] found id: ""
	I0913 20:00:22.862143   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.862157   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:22.862166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:22.862230   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:22.899489   71926 cri.go:89] found id: ""
	I0913 20:00:22.899519   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.899528   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:22.899535   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:22.899604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:22.937229   71926 cri.go:89] found id: ""
	I0913 20:00:22.937255   71926 logs.go:276] 0 containers: []
	W0913 20:00:22.937270   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:22.937282   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:22.937300   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:23.007842   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:23.007871   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:23.007887   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:23.092726   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:23.092763   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:23.131372   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:23.131403   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:23.183785   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:23.183819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:25.698367   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:25.712256   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:25.712338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:25.747797   71926 cri.go:89] found id: ""
	I0913 20:00:25.747824   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.747835   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:25.747842   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:25.747929   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:25.783257   71926 cri.go:89] found id: ""
	I0913 20:00:25.783285   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.783295   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:25.783301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:25.783352   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:25.817087   71926 cri.go:89] found id: ""
	I0913 20:00:25.817120   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.817132   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:25.817142   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:25.817203   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:25.854055   71926 cri.go:89] found id: ""
	I0913 20:00:25.854082   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.854108   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:25.854116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:25.854188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:25.889972   71926 cri.go:89] found id: ""
	I0913 20:00:25.889994   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.890002   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:25.890008   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:25.890058   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:25.927061   71926 cri.go:89] found id: ""
	I0913 20:00:25.927093   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.927104   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:25.927115   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:25.927169   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:25.965541   71926 cri.go:89] found id: ""
	I0913 20:00:25.965570   71926 logs.go:276] 0 containers: []
	W0913 20:00:25.965582   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:25.965588   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:25.965649   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:26.002772   71926 cri.go:89] found id: ""
	I0913 20:00:26.002801   71926 logs.go:276] 0 containers: []
	W0913 20:00:26.002814   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:26.002825   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:26.002840   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:26.054407   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:26.054442   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:26.069608   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:26.069634   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:26.141529   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:26.141557   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:26.141572   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:26.223623   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:26.223657   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:26.336386   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.836218   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.431607   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.431821   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:26.267742   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.766624   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.767391   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:28.764312   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:28.779319   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:28.779383   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:28.813431   71926 cri.go:89] found id: ""
	I0913 20:00:28.813457   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.813465   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:28.813473   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:28.813532   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:28.850083   71926 cri.go:89] found id: ""
	I0913 20:00:28.850145   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.850157   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:28.850163   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:28.850221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:28.890348   71926 cri.go:89] found id: ""
	I0913 20:00:28.890373   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.890384   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:28.890390   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:28.890440   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:28.926531   71926 cri.go:89] found id: ""
	I0913 20:00:28.926564   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.926576   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:28.926583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:28.926650   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:28.968307   71926 cri.go:89] found id: ""
	I0913 20:00:28.968336   71926 logs.go:276] 0 containers: []
	W0913 20:00:28.968349   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:28.968356   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:28.968420   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:29.004257   71926 cri.go:89] found id: ""
	I0913 20:00:29.004285   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.004297   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:29.004304   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:29.004369   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:29.039438   71926 cri.go:89] found id: ""
	I0913 20:00:29.039466   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.039478   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:29.039486   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:29.039546   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:29.075138   71926 cri.go:89] found id: ""
	I0913 20:00:29.075172   71926 logs.go:276] 0 containers: []
	W0913 20:00:29.075184   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:29.075195   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:29.075210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:29.151631   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:29.151671   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:29.193680   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:29.193707   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:29.244926   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:29.244960   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:29.258897   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:29.258922   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:29.329212   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:31.829955   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:31.846683   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:31.846747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:31.888898   71926 cri.go:89] found id: ""
	I0913 20:00:31.888933   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.888944   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:31.888952   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:31.889023   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:31.923896   71926 cri.go:89] found id: ""
	I0913 20:00:31.923922   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.923933   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:31.923940   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:31.923999   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:30.836587   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:33.335323   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:30.431964   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.931375   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:32.770852   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:35.267129   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:31.965263   71926 cri.go:89] found id: ""
	I0913 20:00:31.965297   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.965309   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:31.965317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:31.965387   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:31.998461   71926 cri.go:89] found id: ""
	I0913 20:00:31.998491   71926 logs.go:276] 0 containers: []
	W0913 20:00:31.998505   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:31.998512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:31.998564   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:32.039654   71926 cri.go:89] found id: ""
	I0913 20:00:32.039681   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.039690   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:32.039696   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:32.039747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:32.073387   71926 cri.go:89] found id: ""
	I0913 20:00:32.073413   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.073424   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:32.073432   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:32.073491   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:32.112119   71926 cri.go:89] found id: ""
	I0913 20:00:32.112148   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.112159   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:32.112166   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:32.112231   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:32.156508   71926 cri.go:89] found id: ""
	I0913 20:00:32.156539   71926 logs.go:276] 0 containers: []
	W0913 20:00:32.156548   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:32.156556   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:32.156567   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:32.210961   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:32.210994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:32.224674   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:32.224703   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:32.297699   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:32.297725   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:32.297738   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:32.383090   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:32.383130   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:34.926212   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:34.942930   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:34.943015   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:34.983115   71926 cri.go:89] found id: ""
	I0913 20:00:34.983140   71926 logs.go:276] 0 containers: []
	W0913 20:00:34.983151   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:34.983159   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:34.983232   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:35.022893   71926 cri.go:89] found id: ""
	I0913 20:00:35.022916   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.022924   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:35.022931   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:35.022980   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:35.061105   71926 cri.go:89] found id: ""
	I0913 20:00:35.061129   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.061137   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:35.061143   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:35.061191   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:35.095853   71926 cri.go:89] found id: ""
	I0913 20:00:35.095879   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.095890   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:35.095897   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:35.095966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:35.132771   71926 cri.go:89] found id: ""
	I0913 20:00:35.132796   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.132811   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:35.132816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:35.132879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:35.171692   71926 cri.go:89] found id: ""
	I0913 20:00:35.171720   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.171729   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:35.171734   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:35.171782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:35.212217   71926 cri.go:89] found id: ""
	I0913 20:00:35.212248   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.212258   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:35.212266   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:35.212318   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:35.247910   71926 cri.go:89] found id: ""
	I0913 20:00:35.247938   71926 logs.go:276] 0 containers: []
	W0913 20:00:35.247949   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:35.247958   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:35.247972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:35.321607   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:35.321627   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:35.321641   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:35.405442   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:35.405483   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:35.450174   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:35.450201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:35.503640   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:35.503673   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:35.336847   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.337476   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:34.931775   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.430241   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.432113   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:37.268324   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:39.766957   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:38.019116   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:38.033625   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:38.033696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:38.068648   71926 cri.go:89] found id: ""
	I0913 20:00:38.068677   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.068688   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:38.068696   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:38.068764   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:38.103808   71926 cri.go:89] found id: ""
	I0913 20:00:38.103837   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.103850   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:38.103857   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:38.103920   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:38.143305   71926 cri.go:89] found id: ""
	I0913 20:00:38.143326   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.143335   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:38.143341   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:38.143397   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:38.181356   71926 cri.go:89] found id: ""
	I0913 20:00:38.181382   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.181394   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:38.181401   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:38.181459   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:38.216935   71926 cri.go:89] found id: ""
	I0913 20:00:38.216966   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.216977   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:38.216985   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:38.217043   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:38.253565   71926 cri.go:89] found id: ""
	I0913 20:00:38.253594   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.253606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:38.253614   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:38.253670   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:38.288672   71926 cri.go:89] found id: ""
	I0913 20:00:38.288698   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.288714   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:38.288721   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:38.288782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:38.323979   71926 cri.go:89] found id: ""
	I0913 20:00:38.324001   71926 logs.go:276] 0 containers: []
	W0913 20:00:38.324009   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:38.324017   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:38.324028   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:38.375830   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:38.375861   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:38.391703   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:38.391742   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:38.464586   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:38.464615   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:38.464630   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:38.545577   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:38.545616   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:41.090184   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:41.104040   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:41.104129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:41.148332   71926 cri.go:89] found id: ""
	I0913 20:00:41.148362   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.148371   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:41.148377   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:41.148441   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:41.187205   71926 cri.go:89] found id: ""
	I0913 20:00:41.187230   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.187238   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:41.187244   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:41.187299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:41.223654   71926 cri.go:89] found id: ""
	I0913 20:00:41.223677   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.223685   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:41.223690   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:41.223750   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:41.258481   71926 cri.go:89] found id: ""
	I0913 20:00:41.258505   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.258515   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:41.258520   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:41.258578   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:41.293205   71926 cri.go:89] found id: ""
	I0913 20:00:41.293236   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.293248   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:41.293255   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:41.293313   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:41.331425   71926 cri.go:89] found id: ""
	I0913 20:00:41.331454   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.331466   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:41.331474   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:41.331524   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:41.367916   71926 cri.go:89] found id: ""
	I0913 20:00:41.367942   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.367953   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:41.367960   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:41.368024   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:41.413683   71926 cri.go:89] found id: ""
	I0913 20:00:41.413713   71926 logs.go:276] 0 containers: []
	W0913 20:00:41.413724   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:41.413736   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:41.413752   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:41.468018   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:41.468053   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:41.482397   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:41.482424   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:41.552203   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:41.552223   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:41.552238   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:41.628515   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:41.628553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:39.835678   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:42.336092   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.932753   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.431833   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:41.768156   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.268056   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:44.167382   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:44.180046   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:44.180127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:44.218376   71926 cri.go:89] found id: ""
	I0913 20:00:44.218400   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.218409   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:44.218415   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:44.218474   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:44.255250   71926 cri.go:89] found id: ""
	I0913 20:00:44.255275   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.255284   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:44.255289   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:44.255338   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:44.292561   71926 cri.go:89] found id: ""
	I0913 20:00:44.292587   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.292597   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:44.292604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:44.292662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:44.326876   71926 cri.go:89] found id: ""
	I0913 20:00:44.326900   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.326912   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:44.326919   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:44.326977   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:44.362531   71926 cri.go:89] found id: ""
	I0913 20:00:44.362562   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.362574   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:44.362582   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:44.362646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:44.396145   71926 cri.go:89] found id: ""
	I0913 20:00:44.396170   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.396181   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:44.396188   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:44.396251   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:44.434531   71926 cri.go:89] found id: ""
	I0913 20:00:44.434556   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.434564   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:44.434570   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:44.434627   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:44.470926   71926 cri.go:89] found id: ""
	I0913 20:00:44.470955   71926 logs.go:276] 0 containers: []
	W0913 20:00:44.470965   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:44.470976   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:44.470991   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:44.523651   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:44.523685   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:44.537739   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:44.537766   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:44.608055   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:44.608083   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:44.608098   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:44.694384   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:44.694416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:44.835785   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.336699   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.932718   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:49.431805   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:46.766589   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:48.773406   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:47.235609   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:47.248691   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:47.248749   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:47.284011   71926 cri.go:89] found id: ""
	I0913 20:00:47.284037   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.284049   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:47.284056   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:47.284118   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:47.319729   71926 cri.go:89] found id: ""
	I0913 20:00:47.319755   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.319766   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:47.319772   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:47.319833   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:47.356053   71926 cri.go:89] found id: ""
	I0913 20:00:47.356079   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.356087   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:47.356094   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:47.356172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:47.390054   71926 cri.go:89] found id: ""
	I0913 20:00:47.390083   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.390101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:47.390109   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:47.390171   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:47.428894   71926 cri.go:89] found id: ""
	I0913 20:00:47.428920   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.428932   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:47.428939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:47.428996   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:47.463328   71926 cri.go:89] found id: ""
	I0913 20:00:47.463363   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.463376   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:47.463389   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:47.463450   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:47.500739   71926 cri.go:89] found id: ""
	I0913 20:00:47.500764   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.500773   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:47.500779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:47.500827   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:47.534425   71926 cri.go:89] found id: ""
	I0913 20:00:47.534456   71926 logs.go:276] 0 containers: []
	W0913 20:00:47.534468   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:47.534479   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:47.534493   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:47.584525   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:47.584553   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:47.599468   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:47.599497   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:47.683020   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:47.683044   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:47.683055   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:47.767236   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:47.767272   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:50.310385   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:50.324059   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:50.324136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:50.359347   71926 cri.go:89] found id: ""
	I0913 20:00:50.359381   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.359393   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:50.359401   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:50.359451   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:50.395291   71926 cri.go:89] found id: ""
	I0913 20:00:50.395339   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.395352   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:50.395360   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:50.395416   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:50.430783   71926 cri.go:89] found id: ""
	I0913 20:00:50.430806   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.430816   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:50.430823   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:50.430884   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:50.467883   71926 cri.go:89] found id: ""
	I0913 20:00:50.467916   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.467928   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:50.467935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:50.467997   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:50.505962   71926 cri.go:89] found id: ""
	I0913 20:00:50.505995   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.506007   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:50.506014   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:50.506080   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:50.542408   71926 cri.go:89] found id: ""
	I0913 20:00:50.542432   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.542440   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:50.542445   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:50.542492   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:50.576431   71926 cri.go:89] found id: ""
	I0913 20:00:50.576454   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.576463   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:50.576469   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:50.576533   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:50.609940   71926 cri.go:89] found id: ""
	I0913 20:00:50.609982   71926 logs.go:276] 0 containers: []
	W0913 20:00:50.609994   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:50.610004   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:50.610022   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:50.661793   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:50.661829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:50.676085   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:50.676128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:50.745092   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:50.745118   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:50.745132   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:50.830719   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:50.830754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:49.835228   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.835655   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.835956   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.930403   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.931943   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:51.266576   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.267140   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:55.267966   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:53.369220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:53.381944   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:53.382009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:53.414010   71926 cri.go:89] found id: ""
	I0913 20:00:53.414032   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.414041   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:53.414047   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:53.414131   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:53.448480   71926 cri.go:89] found id: ""
	I0913 20:00:53.448512   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.448523   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:53.448530   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:53.448589   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:53.489451   71926 cri.go:89] found id: ""
	I0913 20:00:53.489483   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.489494   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:53.489502   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:53.489562   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:53.531122   71926 cri.go:89] found id: ""
	I0913 20:00:53.531143   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.531154   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:53.531169   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:53.531228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:53.568070   71926 cri.go:89] found id: ""
	I0913 20:00:53.568098   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.568109   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:53.568116   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:53.568187   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:53.601557   71926 cri.go:89] found id: ""
	I0913 20:00:53.601580   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.601589   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:53.601595   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:53.601646   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:53.637689   71926 cri.go:89] found id: ""
	I0913 20:00:53.637710   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.637719   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:53.637724   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:53.637782   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:53.672876   71926 cri.go:89] found id: ""
	I0913 20:00:53.672899   71926 logs.go:276] 0 containers: []
	W0913 20:00:53.672908   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:53.672916   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:53.672932   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:53.686015   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:53.686044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:53.753998   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:53.754023   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:53.754042   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:53.844298   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:53.844329   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:53.883828   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:53.883853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:56.440474   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:56.453970   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:56.454039   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:56.490986   71926 cri.go:89] found id: ""
	I0913 20:00:56.491013   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.491024   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:56.491031   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:56.491094   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:56.530035   71926 cri.go:89] found id: ""
	I0913 20:00:56.530059   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.530070   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:56.530077   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:56.530175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:56.565256   71926 cri.go:89] found id: ""
	I0913 20:00:56.565280   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.565290   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:56.565297   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:56.565358   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:56.600685   71926 cri.go:89] found id: ""
	I0913 20:00:56.600705   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.600714   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:56.600725   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:56.600777   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:56.635015   71926 cri.go:89] found id: ""
	I0913 20:00:56.635042   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.635053   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:56.635060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:56.635125   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:56.671319   71926 cri.go:89] found id: ""
	I0913 20:00:56.671346   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.671354   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:56.671359   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:56.671407   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:56.732238   71926 cri.go:89] found id: ""
	I0913 20:00:56.732269   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.732280   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:56.732287   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:56.732347   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:56.779316   71926 cri.go:89] found id: ""
	I0913 20:00:56.779345   71926 logs.go:276] 0 containers: []
	W0913 20:00:56.779356   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:56.779366   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:00:56.779378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:00:56.858727   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:00:56.858755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:00:56.899449   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:56.899480   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:55.836469   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.335760   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.431305   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:58.431336   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:57.766219   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:59.767250   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:00:56.952972   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:56.953003   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:56.967092   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:56.967131   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:00:57.052690   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:00:59.553696   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:00:59.569295   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:00:59.569370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:00:59.608548   71926 cri.go:89] found id: ""
	I0913 20:00:59.608592   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.608605   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:00:59.608612   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:00:59.608674   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:00:59.643673   71926 cri.go:89] found id: ""
	I0913 20:00:59.643699   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.643710   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:00:59.643716   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:00:59.643776   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:00:59.687204   71926 cri.go:89] found id: ""
	I0913 20:00:59.687228   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.687237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:00:59.687242   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:00:59.687306   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:00:59.720335   71926 cri.go:89] found id: ""
	I0913 20:00:59.720360   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.720371   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:00:59.720378   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:00:59.720446   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:00:59.761164   71926 cri.go:89] found id: ""
	I0913 20:00:59.761194   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.761205   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:00:59.761213   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:00:59.761278   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:00:59.796770   71926 cri.go:89] found id: ""
	I0913 20:00:59.796796   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.796807   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:00:59.796814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:00:59.796880   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:00:59.832333   71926 cri.go:89] found id: ""
	I0913 20:00:59.832364   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.832377   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:00:59.832385   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:00:59.832444   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:00:59.868387   71926 cri.go:89] found id: ""
	I0913 20:00:59.868415   71926 logs.go:276] 0 containers: []
	W0913 20:00:59.868427   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:00:59.868437   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:00:59.868450   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:00:59.920632   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:00:59.920663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:00:59.937573   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:00:59.937609   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:00.007803   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:00.007826   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:00.007837   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:00.085289   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:00.085324   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:00.336553   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.835544   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:00.931173   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.931879   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.267501   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.766302   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:02.628580   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:02.642122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:02.642200   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:02.678238   71926 cri.go:89] found id: ""
	I0913 20:01:02.678260   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.678268   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:02.678274   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:02.678325   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:02.714164   71926 cri.go:89] found id: ""
	I0913 20:01:02.714187   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.714197   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:02.714202   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:02.714267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:02.750200   71926 cri.go:89] found id: ""
	I0913 20:01:02.750228   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.750237   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:02.750243   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:02.750291   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:02.786893   71926 cri.go:89] found id: ""
	I0913 20:01:02.786921   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.786929   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:02.786935   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:02.786987   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:02.824161   71926 cri.go:89] found id: ""
	I0913 20:01:02.824192   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.824204   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:02.824211   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:02.824274   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:02.861567   71926 cri.go:89] found id: ""
	I0913 20:01:02.861594   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.861606   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:02.861613   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:02.861678   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:02.897791   71926 cri.go:89] found id: ""
	I0913 20:01:02.897813   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.897822   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:02.897827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:02.897875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:02.935790   71926 cri.go:89] found id: ""
	I0913 20:01:02.935818   71926 logs.go:276] 0 containers: []
	W0913 20:01:02.935830   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:02.935840   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:02.935853   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:02.987011   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:02.987044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:03.000688   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:03.000722   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:03.075757   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:03.075780   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:03.075795   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:03.167565   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:03.167611   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:05.713852   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:05.727810   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:05.727876   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:05.765630   71926 cri.go:89] found id: ""
	I0913 20:01:05.765655   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.765666   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:05.765672   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:05.765720   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:05.801085   71926 cri.go:89] found id: ""
	I0913 20:01:05.801123   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.801136   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:05.801142   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:05.801209   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:05.840112   71926 cri.go:89] found id: ""
	I0913 20:01:05.840146   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.840156   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:05.840163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:05.840225   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:05.876082   71926 cri.go:89] found id: ""
	I0913 20:01:05.876107   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.876118   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:05.876125   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:05.876188   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:05.913772   71926 cri.go:89] found id: ""
	I0913 20:01:05.913801   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.913813   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:05.913820   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:05.913879   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:05.947670   71926 cri.go:89] found id: ""
	I0913 20:01:05.947694   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.947702   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:05.947708   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:05.947756   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:05.989763   71926 cri.go:89] found id: ""
	I0913 20:01:05.989794   71926 logs.go:276] 0 containers: []
	W0913 20:01:05.989807   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:05.989814   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:05.989875   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:06.030319   71926 cri.go:89] found id: ""
	I0913 20:01:06.030361   71926 logs.go:276] 0 containers: []
	W0913 20:01:06.030373   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:06.030383   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:06.030397   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:06.111153   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:06.111185   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:06.149331   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:06.149357   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:06.202013   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:06.202056   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:06.216224   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:06.216252   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:06.291004   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:04.839716   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.334774   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:04.932814   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:07.431144   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.431578   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:06.766410   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:09.267184   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:08.791573   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:08.804872   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:08.804930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:08.846040   71926 cri.go:89] found id: ""
	I0913 20:01:08.846068   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.846081   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:08.846088   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:08.846159   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:08.892954   71926 cri.go:89] found id: ""
	I0913 20:01:08.892986   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.892998   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:08.893005   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:08.893068   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:08.949737   71926 cri.go:89] found id: ""
	I0913 20:01:08.949762   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.949773   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:08.949779   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:08.949847   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:08.991915   71926 cri.go:89] found id: ""
	I0913 20:01:08.991938   71926 logs.go:276] 0 containers: []
	W0913 20:01:08.991950   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:08.991956   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:08.992009   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:09.028702   71926 cri.go:89] found id: ""
	I0913 20:01:09.028733   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.028754   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:09.028764   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:09.028828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:09.063615   71926 cri.go:89] found id: ""
	I0913 20:01:09.063639   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.063648   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:09.063653   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:09.063713   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:09.096739   71926 cri.go:89] found id: ""
	I0913 20:01:09.096766   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.096776   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:09.096782   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:09.096838   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:09.129605   71926 cri.go:89] found id: ""
	I0913 20:01:09.129635   71926 logs.go:276] 0 containers: []
	W0913 20:01:09.129647   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:09.129657   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:09.129674   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:09.181245   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:09.181280   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:09.195598   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:09.195627   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:09.271676   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:09.271703   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:09.271718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:09.353657   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:09.353688   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:11.896613   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:11.910334   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:11.910410   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:09.336081   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.336204   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:13.336445   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.934825   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.430581   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.766779   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:14.267119   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:11.947632   71926 cri.go:89] found id: ""
	I0913 20:01:11.947654   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.947663   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:11.947669   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:11.947727   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:11.983996   71926 cri.go:89] found id: ""
	I0913 20:01:11.984019   71926 logs.go:276] 0 containers: []
	W0913 20:01:11.984028   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:11.984034   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:11.984082   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:12.020497   71926 cri.go:89] found id: ""
	I0913 20:01:12.020536   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.020548   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:12.020556   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:12.020615   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:12.059799   71926 cri.go:89] found id: ""
	I0913 20:01:12.059821   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.059829   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:12.059835   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:12.059882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:12.099079   71926 cri.go:89] found id: ""
	I0913 20:01:12.099113   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.099125   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:12.099132   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:12.099193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:12.132677   71926 cri.go:89] found id: ""
	I0913 20:01:12.132704   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.132716   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:12.132722   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:12.132769   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:12.165522   71926 cri.go:89] found id: ""
	I0913 20:01:12.165546   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.165560   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:12.165565   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:12.165625   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:12.207145   71926 cri.go:89] found id: ""
	I0913 20:01:12.207168   71926 logs.go:276] 0 containers: []
	W0913 20:01:12.207177   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:12.207185   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:12.207195   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:12.260523   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:12.260556   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:12.275208   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:12.275236   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:12.346424   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:12.346447   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:12.346463   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:12.431012   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:12.431050   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:14.970909   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:14.984452   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:14.984512   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:15.020614   71926 cri.go:89] found id: ""
	I0913 20:01:15.020635   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.020645   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:15.020650   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:15.020695   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:15.060481   71926 cri.go:89] found id: ""
	I0913 20:01:15.060508   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.060516   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:15.060520   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:15.060580   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:15.095109   71926 cri.go:89] found id: ""
	I0913 20:01:15.095134   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.095143   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:15.095148   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:15.095201   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:15.130733   71926 cri.go:89] found id: ""
	I0913 20:01:15.130754   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.130762   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:15.130768   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:15.130816   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:15.164998   71926 cri.go:89] found id: ""
	I0913 20:01:15.165027   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.165042   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:15.165049   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:15.165100   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:15.205957   71926 cri.go:89] found id: ""
	I0913 20:01:15.205981   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.205989   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:15.205994   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:15.206053   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:15.240502   71926 cri.go:89] found id: ""
	I0913 20:01:15.240526   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.240535   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:15.240540   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:15.240586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:15.275900   71926 cri.go:89] found id: ""
	I0913 20:01:15.275920   71926 logs.go:276] 0 containers: []
	W0913 20:01:15.275928   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:15.275936   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:15.275946   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:15.343658   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:15.343677   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:15.343690   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:15.426317   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:15.426355   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:15.474538   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:15.474569   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:15.525405   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:15.525439   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:15.836259   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.336529   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.431423   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.930385   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:16.766863   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:19.266906   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:18.040698   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:18.053759   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:18.053820   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:18.086056   71926 cri.go:89] found id: ""
	I0913 20:01:18.086078   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.086087   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:18.086092   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:18.086166   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:18.119247   71926 cri.go:89] found id: ""
	I0913 20:01:18.119277   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.119290   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:18.119301   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:18.119366   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:18.163417   71926 cri.go:89] found id: ""
	I0913 20:01:18.163442   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.163450   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:18.163456   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:18.163504   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:18.196786   71926 cri.go:89] found id: ""
	I0913 20:01:18.196812   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.196820   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:18.196826   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:18.196878   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:18.231864   71926 cri.go:89] found id: ""
	I0913 20:01:18.231893   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.231903   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:18.231909   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:18.231973   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:18.268498   71926 cri.go:89] found id: ""
	I0913 20:01:18.268518   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.268527   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:18.268534   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:18.268586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:18.305391   71926 cri.go:89] found id: ""
	I0913 20:01:18.305418   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.305430   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:18.305438   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:18.305499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:18.340160   71926 cri.go:89] found id: ""
	I0913 20:01:18.340187   71926 logs.go:276] 0 containers: []
	W0913 20:01:18.340197   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:18.340207   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:18.340220   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:18.391723   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:18.391757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:18.405914   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:18.405943   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:18.476941   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:18.476960   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:18.476972   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:18.556812   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:18.556845   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.098172   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:21.111147   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:21.111211   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:21.147273   71926 cri.go:89] found id: ""
	I0913 20:01:21.147299   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.147316   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:21.147322   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:21.147370   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:21.185018   71926 cri.go:89] found id: ""
	I0913 20:01:21.185047   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.185059   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:21.185066   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:21.185148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:21.219763   71926 cri.go:89] found id: ""
	I0913 20:01:21.219798   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.219809   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:21.219816   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:21.219882   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:21.257121   71926 cri.go:89] found id: ""
	I0913 20:01:21.257149   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.257161   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:21.257167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:21.257229   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:21.292011   71926 cri.go:89] found id: ""
	I0913 20:01:21.292038   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.292049   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:21.292060   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:21.292124   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:21.327563   71926 cri.go:89] found id: ""
	I0913 20:01:21.327592   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.327604   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:21.327612   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:21.327679   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:21.365175   71926 cri.go:89] found id: ""
	I0913 20:01:21.365201   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.365209   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:21.365215   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:21.365272   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:21.403238   71926 cri.go:89] found id: ""
	I0913 20:01:21.403266   71926 logs.go:276] 0 containers: []
	W0913 20:01:21.403278   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:21.403288   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:21.403310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:21.417704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:21.417737   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:21.490232   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:21.490264   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:21.490283   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:21.573376   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:21.573414   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:21.619573   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:21.619604   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:20.835709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.835800   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:20.931257   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:22.932350   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:21.267729   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:23.767489   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.768029   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:24.173931   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:24.187538   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:24.187608   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:24.227803   71926 cri.go:89] found id: ""
	I0913 20:01:24.227827   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.227836   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:24.227841   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:24.227898   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:24.263194   71926 cri.go:89] found id: ""
	I0913 20:01:24.263223   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.263239   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:24.263246   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:24.263308   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:24.298159   71926 cri.go:89] found id: ""
	I0913 20:01:24.298188   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.298201   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:24.298209   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:24.298267   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:24.340590   71926 cri.go:89] found id: ""
	I0913 20:01:24.340621   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.340633   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:24.340640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:24.340699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:24.385839   71926 cri.go:89] found id: ""
	I0913 20:01:24.385866   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.385875   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:24.385880   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:24.385944   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:24.424723   71926 cri.go:89] found id: ""
	I0913 20:01:24.424750   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.424761   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:24.424767   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:24.424828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:24.461879   71926 cri.go:89] found id: ""
	I0913 20:01:24.461911   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.461922   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:24.461929   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:24.461995   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:24.497230   71926 cri.go:89] found id: ""
	I0913 20:01:24.497257   71926 logs.go:276] 0 containers: []
	W0913 20:01:24.497269   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:24.497278   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:24.497297   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:24.538048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:24.538084   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:24.592840   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:24.592880   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:24.608817   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:24.608851   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:24.683335   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:24.683356   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:24.683367   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:24.836044   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.335709   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:25.431310   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.931864   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:28.266427   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:30.765946   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:27.262211   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:27.275199   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:27.275277   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:27.308960   71926 cri.go:89] found id: ""
	I0913 20:01:27.308986   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.308994   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:27.309000   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:27.309065   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:27.345030   71926 cri.go:89] found id: ""
	I0913 20:01:27.345055   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.345067   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:27.345074   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:27.345132   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:27.380791   71926 cri.go:89] found id: ""
	I0913 20:01:27.380823   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.380831   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:27.380837   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:27.380896   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:27.416561   71926 cri.go:89] found id: ""
	I0913 20:01:27.416589   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.416599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:27.416604   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:27.416654   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:27.455781   71926 cri.go:89] found id: ""
	I0913 20:01:27.455809   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.455820   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:27.455827   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:27.455887   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:27.493920   71926 cri.go:89] found id: ""
	I0913 20:01:27.493950   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.493959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:27.493967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:27.494016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:27.531706   71926 cri.go:89] found id: ""
	I0913 20:01:27.531730   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.531740   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:27.531746   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:27.531796   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:27.568697   71926 cri.go:89] found id: ""
	I0913 20:01:27.568726   71926 logs.go:276] 0 containers: []
	W0913 20:01:27.568735   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:27.568744   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:27.568755   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:27.620618   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:27.620655   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:27.636353   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:27.636378   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:27.709779   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:27.709806   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:27.709819   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:27.784476   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:27.784508   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.331351   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:30.344583   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:30.344647   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:30.379992   71926 cri.go:89] found id: ""
	I0913 20:01:30.380018   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.380051   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:30.380059   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:30.380129   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:30.415148   71926 cri.go:89] found id: ""
	I0913 20:01:30.415174   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.415185   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:30.415192   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:30.415250   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:30.450578   71926 cri.go:89] found id: ""
	I0913 20:01:30.450602   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.450611   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:30.450616   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:30.450675   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:30.484582   71926 cri.go:89] found id: ""
	I0913 20:01:30.484612   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.484623   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:30.484631   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:30.484683   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:30.520476   71926 cri.go:89] found id: ""
	I0913 20:01:30.520498   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.520506   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:30.520512   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:30.520559   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:30.556898   71926 cri.go:89] found id: ""
	I0913 20:01:30.556928   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.556937   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:30.556943   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:30.556993   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:30.595216   71926 cri.go:89] found id: ""
	I0913 20:01:30.595240   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.595252   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:30.595259   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:30.595312   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:30.629834   71926 cri.go:89] found id: ""
	I0913 20:01:30.629866   71926 logs.go:276] 0 containers: []
	W0913 20:01:30.629880   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:30.629891   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:30.629907   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:30.643487   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:30.643516   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:30.718267   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:30.718290   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:30.718304   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:30.803014   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:30.803044   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:30.843670   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:30.843694   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:29.336064   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:31.836582   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:29.932193   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.431217   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:32.766473   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.767287   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:33.396566   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:33.409579   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:33.409641   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:33.444647   71926 cri.go:89] found id: ""
	I0913 20:01:33.444671   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.444682   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:33.444689   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:33.444747   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:33.479545   71926 cri.go:89] found id: ""
	I0913 20:01:33.479568   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.479577   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:33.479583   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:33.479634   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:33.517343   71926 cri.go:89] found id: ""
	I0913 20:01:33.517366   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.517375   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:33.517380   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:33.517437   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:33.551394   71926 cri.go:89] found id: ""
	I0913 20:01:33.551424   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.551436   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:33.551449   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:33.551499   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:33.583871   71926 cri.go:89] found id: ""
	I0913 20:01:33.583893   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.583902   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:33.583907   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:33.583966   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:33.616984   71926 cri.go:89] found id: ""
	I0913 20:01:33.617008   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.617018   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:33.617023   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:33.617078   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:33.660231   71926 cri.go:89] found id: ""
	I0913 20:01:33.660252   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.660261   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:33.660267   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:33.660315   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:33.705140   71926 cri.go:89] found id: ""
	I0913 20:01:33.705176   71926 logs.go:276] 0 containers: []
	W0913 20:01:33.705188   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:33.705198   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:33.705213   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:33.781889   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:33.781916   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:33.781931   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:33.860132   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:33.860171   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:33.899723   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:33.899754   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:33.953380   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:33.953416   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:36.469123   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:36.482260   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:36.482324   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:36.516469   71926 cri.go:89] found id: ""
	I0913 20:01:36.516494   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.516504   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:36.516509   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:36.516599   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:36.554065   71926 cri.go:89] found id: ""
	I0913 20:01:36.554103   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.554116   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:36.554123   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:36.554197   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:36.587706   71926 cri.go:89] found id: ""
	I0913 20:01:36.587736   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.587745   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:36.587750   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:36.587812   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:36.621866   71926 cri.go:89] found id: ""
	I0913 20:01:36.621897   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.621908   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:36.621915   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:36.621984   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:36.656374   71926 cri.go:89] found id: ""
	I0913 20:01:36.656403   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.656411   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:36.656416   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:36.656465   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:36.690236   71926 cri.go:89] found id: ""
	I0913 20:01:36.690259   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.690268   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:36.690273   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:36.690329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:36.725544   71926 cri.go:89] found id: ""
	I0913 20:01:36.725580   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.725592   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:36.725600   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:36.725656   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:36.762143   71926 cri.go:89] found id: ""
	I0913 20:01:36.762175   71926 logs.go:276] 0 containers: []
	W0913 20:01:36.762186   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:36.762195   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:36.762210   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:36.837436   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:36.837459   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:36.837473   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:36.916272   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:36.916310   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:34.334975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.335436   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:38.835559   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:34.930444   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.931136   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.430007   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:37.266186   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:39.769801   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:36.962300   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:36.962332   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:37.016419   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:37.016449   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.532924   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:39.546066   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:39.546150   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:39.585711   71926 cri.go:89] found id: ""
	I0913 20:01:39.585733   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.585741   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:39.585747   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:39.585803   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:39.623366   71926 cri.go:89] found id: ""
	I0913 20:01:39.623409   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.623419   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:39.623425   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:39.623487   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:39.660533   71926 cri.go:89] found id: ""
	I0913 20:01:39.660558   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.660567   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:39.660575   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:39.660637   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:39.694266   71926 cri.go:89] found id: ""
	I0913 20:01:39.694293   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.694304   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:39.694311   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:39.694373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:39.734258   71926 cri.go:89] found id: ""
	I0913 20:01:39.734284   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.734295   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:39.734302   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:39.734361   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:39.770202   71926 cri.go:89] found id: ""
	I0913 20:01:39.770219   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.770227   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:39.770233   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:39.770284   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:39.804380   71926 cri.go:89] found id: ""
	I0913 20:01:39.804417   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.804425   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:39.804431   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:39.804481   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:39.837961   71926 cri.go:89] found id: ""
	I0913 20:01:39.837989   71926 logs.go:276] 0 containers: []
	W0913 20:01:39.838011   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:39.838021   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:39.838047   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:39.851152   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:39.851176   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:39.930199   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:39.930224   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:39.930239   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:40.007475   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:40.007507   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:40.050291   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:40.050320   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:40.835948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.836933   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:41.431508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:43.930509   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.265895   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:44.267214   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:42.599100   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:42.613586   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:42.613662   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:42.648187   71926 cri.go:89] found id: ""
	I0913 20:01:42.648213   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.648225   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:42.648232   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:42.648283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:42.692692   71926 cri.go:89] found id: ""
	I0913 20:01:42.692721   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.692730   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:42.692736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:42.692787   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:42.726248   71926 cri.go:89] found id: ""
	I0913 20:01:42.726273   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.726283   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:42.726291   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:42.726364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:42.764372   71926 cri.go:89] found id: ""
	I0913 20:01:42.764416   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.764428   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:42.764434   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:42.764495   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:42.802871   71926 cri.go:89] found id: ""
	I0913 20:01:42.802902   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.802914   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:42.802922   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:42.802983   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:42.840018   71926 cri.go:89] found id: ""
	I0913 20:01:42.840048   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.840060   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:42.840067   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:42.840127   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:42.876359   71926 cri.go:89] found id: ""
	I0913 20:01:42.876388   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.876400   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:42.876408   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:42.876473   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:42.911673   71926 cri.go:89] found id: ""
	I0913 20:01:42.911697   71926 logs.go:276] 0 containers: []
	W0913 20:01:42.911706   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:42.911713   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:42.911725   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:43.016107   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:43.016143   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:43.061781   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:43.061811   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:43.112989   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:43.113035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:43.129172   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:43.129201   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:43.209596   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
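Every "describe nodes" attempt in this dump fails the same way: the kubectl child process exits with status 1 and reports "The connection to the server localhost:8443 was refused", meaning no apiserver is listening on the node. A minimal sketch of that reachability check (a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The describe-nodes failures above reduce to this dial failing on the node.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("kube-apiserver is not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}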
	I0913 20:01:45.710278   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:45.723939   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:45.724013   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:45.759393   71926 cri.go:89] found id: ""
	I0913 20:01:45.759420   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.759432   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:45.759439   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:45.759500   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:45.796795   71926 cri.go:89] found id: ""
	I0913 20:01:45.796820   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.796831   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:45.796838   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:45.796911   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:45.837904   71926 cri.go:89] found id: ""
	I0913 20:01:45.837929   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.837939   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:45.837945   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:45.838005   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:45.873079   71926 cri.go:89] found id: ""
	I0913 20:01:45.873104   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.873115   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:45.873122   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:45.873194   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:45.910115   71926 cri.go:89] found id: ""
	I0913 20:01:45.910144   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.910160   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:45.910167   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:45.910228   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:45.949135   71926 cri.go:89] found id: ""
	I0913 20:01:45.949166   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.949178   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:45.949186   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:45.949246   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:45.984997   71926 cri.go:89] found id: ""
	I0913 20:01:45.985023   71926 logs.go:276] 0 containers: []
	W0913 20:01:45.985033   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:45.985038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:45.985088   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:46.023590   71926 cri.go:89] found id: ""
	I0913 20:01:46.023618   71926 logs.go:276] 0 containers: []
	W0913 20:01:46.023632   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:46.023642   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:46.023656   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:46.062364   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:46.062392   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:46.114617   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:46.114650   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:46.129756   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:46.129799   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:46.202031   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:46.202052   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:46.202063   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:45.337317   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.834948   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:45.931344   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:47.932508   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:46.776369   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:49.268050   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:48.782933   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:48.796685   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:48.796758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:48.832035   71926 cri.go:89] found id: ""
	I0913 20:01:48.832061   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.832074   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:48.832081   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:48.832155   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:48.866904   71926 cri.go:89] found id: ""
	I0913 20:01:48.866929   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.866942   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:48.866947   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:48.867004   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:48.904082   71926 cri.go:89] found id: ""
	I0913 20:01:48.904105   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.904113   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:48.904118   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:48.904174   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:48.947496   71926 cri.go:89] found id: ""
	I0913 20:01:48.947519   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.947526   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:48.947532   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:48.947588   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:48.982918   71926 cri.go:89] found id: ""
	I0913 20:01:48.982954   71926 logs.go:276] 0 containers: []
	W0913 20:01:48.982964   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:48.982969   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:48.983034   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:49.023593   71926 cri.go:89] found id: ""
	I0913 20:01:49.023615   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.023623   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:49.023629   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:49.023690   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:49.068211   71926 cri.go:89] found id: ""
	I0913 20:01:49.068233   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.068241   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:49.068246   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:49.068310   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:49.104174   71926 cri.go:89] found id: ""
	I0913 20:01:49.104195   71926 logs.go:276] 0 containers: []
	W0913 20:01:49.104203   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:49.104210   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:49.104221   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:49.158282   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:49.158317   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:49.173592   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:49.173617   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:49.249039   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:49.249067   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:49.249082   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:49.334924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:49.334969   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:51.884986   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:51.898020   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:51.898172   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:51.937858   71926 cri.go:89] found id: ""
	I0913 20:01:51.937885   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.937896   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:51.937905   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:51.937971   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:49.836646   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.337477   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:50.432045   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:52.930984   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.765027   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:53.766659   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.766923   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:51.977712   71926 cri.go:89] found id: ""
	I0913 20:01:51.977743   71926 logs.go:276] 0 containers: []
	W0913 20:01:51.977756   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:51.977766   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:51.977852   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:52.012504   71926 cri.go:89] found id: ""
	I0913 20:01:52.012529   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.012539   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:52.012544   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:52.012604   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:52.047642   71926 cri.go:89] found id: ""
	I0913 20:01:52.047671   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.047683   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:52.047692   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:52.047743   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:52.087213   71926 cri.go:89] found id: ""
	I0913 20:01:52.087240   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.087251   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:52.087258   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:52.087317   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:52.126609   71926 cri.go:89] found id: ""
	I0913 20:01:52.126633   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.126641   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:52.126647   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:52.126699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:52.160754   71926 cri.go:89] found id: ""
	I0913 20:01:52.160778   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.160789   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:52.160796   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:52.160857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:52.196945   71926 cri.go:89] found id: ""
	I0913 20:01:52.196967   71926 logs.go:276] 0 containers: []
	W0913 20:01:52.196975   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:52.196983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:52.196994   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:52.248515   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:52.248552   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:52.264602   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:52.264629   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:52.339626   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:52.339653   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:52.339668   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:52.419793   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:52.419827   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.966220   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:54.981234   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:54.981299   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:55.015806   71926 cri.go:89] found id: ""
	I0913 20:01:55.015843   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.015855   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:55.015861   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:55.015931   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:55.053979   71926 cri.go:89] found id: ""
	I0913 20:01:55.054002   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.054011   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:55.054016   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:55.054075   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:55.096464   71926 cri.go:89] found id: ""
	I0913 20:01:55.096495   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.096506   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:55.096514   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:55.096591   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:55.146557   71926 cri.go:89] found id: ""
	I0913 20:01:55.146586   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.146599   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:55.146606   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:55.146667   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:55.199034   71926 cri.go:89] found id: ""
	I0913 20:01:55.199063   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.199074   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:55.199083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:55.199144   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:55.234731   71926 cri.go:89] found id: ""
	I0913 20:01:55.234760   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.234772   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:55.234780   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:55.234830   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:55.273543   71926 cri.go:89] found id: ""
	I0913 20:01:55.273570   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.273579   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:55.273585   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:55.273643   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:55.310449   71926 cri.go:89] found id: ""
	I0913 20:01:55.310483   71926 logs.go:276] 0 containers: []
	W0913 20:01:55.310496   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:55.310507   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:55.310521   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:55.360725   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:55.360761   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:55.375346   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:55.375374   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:55.445075   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:55.445098   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:55.445108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:55.520105   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:55.520144   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:01:54.835305   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:56.835825   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:58.836975   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:55.431354   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.930223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:57.767026   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:00.266415   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
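The interleaved pod_ready lines come from three other test contexts (PIDs 71233, 71702 and 71424) that are polling their metrics-server pods and seeing the Ready condition stay "False". A minimal client-go sketch of such a wait loop follows; it is an assumed illustration, not the suite's pod_ready.go, and the pod name is simply taken from the lines above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-bq7jp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Mirrors the repeated log lines: the pod still has status "Ready":"False".
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}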
	I0913 20:01:58.059379   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:01:58.072373   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:01:58.072454   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:01:58.108624   71926 cri.go:89] found id: ""
	I0913 20:01:58.108650   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.108659   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:01:58.108665   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:01:58.108722   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:01:58.143978   71926 cri.go:89] found id: ""
	I0913 20:01:58.143999   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.144007   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:01:58.144013   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:01:58.144059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:01:58.178996   71926 cri.go:89] found id: ""
	I0913 20:01:58.179024   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.179032   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:01:58.179038   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:01:58.179097   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:01:58.211596   71926 cri.go:89] found id: ""
	I0913 20:01:58.211624   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.211634   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:01:58.211640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:01:58.211696   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:01:58.245034   71926 cri.go:89] found id: ""
	I0913 20:01:58.245065   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.245077   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:01:58.245085   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:01:58.245148   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:01:58.283216   71926 cri.go:89] found id: ""
	I0913 20:01:58.283238   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.283247   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:01:58.283252   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:01:58.283309   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:01:58.321463   71926 cri.go:89] found id: ""
	I0913 20:01:58.321484   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.321492   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:01:58.321498   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:01:58.321544   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:01:58.355902   71926 cri.go:89] found id: ""
	I0913 20:01:58.355926   71926 logs.go:276] 0 containers: []
	W0913 20:01:58.355937   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:01:58.355947   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:01:58.355965   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:01:58.413005   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:01:58.413035   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:01:58.428082   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:01:58.428108   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:01:58.498169   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:01:58.498197   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:01:58.498212   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:01:58.578725   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:01:58.578757   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.119256   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:01.146017   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:01.146114   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:01.196918   71926 cri.go:89] found id: ""
	I0913 20:02:01.196945   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.196953   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:01.196959   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:01.197016   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:01.233679   71926 cri.go:89] found id: ""
	I0913 20:02:01.233718   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.233729   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:01.233736   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:01.233800   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:01.269464   71926 cri.go:89] found id: ""
	I0913 20:02:01.269492   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.269503   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:01.269510   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:01.269570   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:01.305729   71926 cri.go:89] found id: ""
	I0913 20:02:01.305754   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.305763   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:01.305769   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:01.305828   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:01.342388   71926 cri.go:89] found id: ""
	I0913 20:02:01.342415   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.342426   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:01.342433   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:01.342496   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:01.381841   71926 cri.go:89] found id: ""
	I0913 20:02:01.381870   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.381887   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:01.381895   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:01.381959   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:01.418835   71926 cri.go:89] found id: ""
	I0913 20:02:01.418875   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.418886   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:01.418893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:01.418957   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:01.457113   71926 cri.go:89] found id: ""
	I0913 20:02:01.457147   71926 logs.go:276] 0 containers: []
	W0913 20:02:01.457158   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:01.457168   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:01.457178   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:01.513024   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:01.513060   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:01.528102   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:01.528128   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:01.602459   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:01.602480   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:01.602495   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:01.685567   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:01.685599   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:01.336152   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:03.836139   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:01:59.931408   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.430247   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.431966   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:02.266731   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.768148   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:04.231832   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:04.246134   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:04.246215   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:04.285749   71926 cri.go:89] found id: ""
	I0913 20:02:04.285779   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.285789   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:04.285796   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:04.285857   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:04.320201   71926 cri.go:89] found id: ""
	I0913 20:02:04.320235   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.320246   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:04.320252   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:04.320314   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:04.356354   71926 cri.go:89] found id: ""
	I0913 20:02:04.356376   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.356387   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:04.356394   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:04.356452   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:04.391995   71926 cri.go:89] found id: ""
	I0913 20:02:04.392025   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.392036   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:04.392044   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:04.392111   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:04.425216   71926 cri.go:89] found id: ""
	I0913 20:02:04.425245   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.425255   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:04.425262   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:04.425327   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:04.473200   71926 cri.go:89] found id: ""
	I0913 20:02:04.473223   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.473232   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:04.473238   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:04.473283   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:04.508088   71926 cri.go:89] found id: ""
	I0913 20:02:04.508110   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.508119   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:04.508124   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:04.508175   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:04.543322   71926 cri.go:89] found id: ""
	I0913 20:02:04.543343   71926 logs.go:276] 0 containers: []
	W0913 20:02:04.543351   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:04.543360   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:04.543370   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:04.618660   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:04.618679   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:04.618691   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:04.695411   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:04.695446   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:04.735797   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:04.735829   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:04.792281   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:04.792318   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:05.836177   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.837164   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:06.931841   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:09.432062   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266508   71424 pod_ready.go:103] pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:07.266540   71424 pod_ready.go:82] duration metric: took 4m0.00658418s for pod "metrics-server-6867b74b74-bq7jp" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:07.266553   71424 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:07.266569   71424 pod_ready.go:39] duration metric: took 4m3.201709894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:07.266588   71424 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:07.266618   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.266671   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.316650   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.316674   71424 cri.go:89] found id: ""
	I0913 20:02:07.316681   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:07.316740   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.321334   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.321407   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.373164   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:07.373187   71424 cri.go:89] found id: ""
	I0913 20:02:07.373197   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:07.373247   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.377883   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.377954   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.424142   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:07.424169   71424 cri.go:89] found id: ""
	I0913 20:02:07.424179   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:07.424241   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.429508   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.429578   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.484114   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.484180   71424 cri.go:89] found id: ""
	I0913 20:02:07.484193   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:07.484250   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.488689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.488757   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.527755   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:07.527777   71424 cri.go:89] found id: ""
	I0913 20:02:07.527786   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:07.527840   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.532748   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.532806   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.570018   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.570043   71424 cri.go:89] found id: ""
	I0913 20:02:07.570052   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:07.570125   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.574697   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.574765   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.618877   71424 cri.go:89] found id: ""
	I0913 20:02:07.618971   71424 logs.go:276] 0 containers: []
	W0913 20:02:07.618998   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.619014   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:07.619122   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:07.659244   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:07.659270   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.659275   71424 cri.go:89] found id: ""
	I0913 20:02:07.659283   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:07.659335   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.664257   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:07.668591   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:07.668613   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:07.709612   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:07.709638   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:07.765784   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:07.765838   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:07.808828   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.808853   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:08.315417   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:08.315462   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:08.361953   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:08.361984   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:08.434091   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:08.434143   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:08.448853   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:08.448877   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:08.510886   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:08.510919   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:08.547445   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:08.547482   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:08.585883   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:08.585907   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:08.628105   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:08.628134   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:08.764531   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:08.764562   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:07.307778   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:07.322032   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:07.322110   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:07.357558   71926 cri.go:89] found id: ""
	I0913 20:02:07.357585   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.357597   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:07.357605   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:07.357664   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:07.391428   71926 cri.go:89] found id: ""
	I0913 20:02:07.391457   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.391468   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:07.391476   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:07.391531   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:07.427244   71926 cri.go:89] found id: ""
	I0913 20:02:07.427268   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.427281   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:07.427289   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:07.427364   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:07.477369   71926 cri.go:89] found id: ""
	I0913 20:02:07.477399   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.477411   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:07.477420   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:07.477478   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:07.519477   71926 cri.go:89] found id: ""
	I0913 20:02:07.519505   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.519516   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:07.519524   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:07.519586   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:07.556229   71926 cri.go:89] found id: ""
	I0913 20:02:07.556252   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.556260   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:07.556270   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:07.556329   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:07.594498   71926 cri.go:89] found id: ""
	I0913 20:02:07.594531   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.594543   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:07.594551   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:07.594609   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:07.634000   71926 cri.go:89] found id: ""
	I0913 20:02:07.634027   71926 logs.go:276] 0 containers: []
	W0913 20:02:07.634038   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:07.634048   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:07.634061   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:07.690769   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:07.690801   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:07.708059   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:07.708087   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:07.783421   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:07.783440   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:07.783481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:07.866138   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:07.866169   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.416560   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:10.430464   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:10.430536   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:10.467821   71926 cri.go:89] found id: ""
	I0913 20:02:10.467849   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.467858   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:10.467864   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:10.467930   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:10.504320   71926 cri.go:89] found id: ""
	I0913 20:02:10.504347   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.504358   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:10.504371   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:10.504461   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:10.541261   71926 cri.go:89] found id: ""
	I0913 20:02:10.541290   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.541302   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:10.541309   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:10.541376   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:10.576270   71926 cri.go:89] found id: ""
	I0913 20:02:10.576297   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.576310   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:10.576317   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:10.576373   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:10.614974   71926 cri.go:89] found id: ""
	I0913 20:02:10.615004   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.615022   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:10.615029   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:10.615091   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:10.650917   71926 cri.go:89] found id: ""
	I0913 20:02:10.650947   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.650959   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:10.650967   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:10.651028   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:10.688597   71926 cri.go:89] found id: ""
	I0913 20:02:10.688622   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.688632   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:10.688640   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:10.688699   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:10.723937   71926 cri.go:89] found id: ""
	I0913 20:02:10.723962   71926 logs.go:276] 0 containers: []
	W0913 20:02:10.723973   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:10.723983   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:10.723998   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:10.776033   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:10.776065   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:10.791601   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:10.791624   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:10.870427   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:10.870453   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:10.870481   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:10.950924   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:10.950958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:10.335945   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:12.336240   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.932240   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:14.430527   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:11.311597   71424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:11.329620   71424 api_server.go:72] duration metric: took 4m14.578764648s to wait for apiserver process to appear ...
	I0913 20:02:11.329645   71424 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:02:11.329689   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:11.329748   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:11.372419   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:11.372443   71424 cri.go:89] found id: ""
	I0913 20:02:11.372454   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:11.372510   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.377048   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:11.377112   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:11.415150   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.415177   71424 cri.go:89] found id: ""
	I0913 20:02:11.415186   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:11.415255   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.420007   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:11.420092   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:11.459538   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.459560   71424 cri.go:89] found id: ""
	I0913 20:02:11.459568   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:11.459626   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.464079   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:11.464133   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:11.502877   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:11.502902   71424 cri.go:89] found id: ""
	I0913 20:02:11.502909   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:11.502958   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.507529   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:11.507614   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:11.553452   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.553476   71424 cri.go:89] found id: ""
	I0913 20:02:11.553484   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:11.553538   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.557584   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:11.557649   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:11.598606   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.598632   71424 cri.go:89] found id: ""
	I0913 20:02:11.598641   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:11.598694   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.602735   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:11.602803   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:11.637072   71424 cri.go:89] found id: ""
	I0913 20:02:11.637099   71424 logs.go:276] 0 containers: []
	W0913 20:02:11.637110   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:11.637133   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:11.637197   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:11.680922   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.680941   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:11.680945   71424 cri.go:89] found id: ""
	I0913 20:02:11.680952   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:11.680993   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.685264   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:11.689862   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:11.689887   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:11.758440   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:11.758475   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:11.799263   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:11.799295   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:11.837890   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:11.837918   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:11.902156   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:11.902189   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:11.953825   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:11.953854   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:12.022461   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:12.022498   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:12.038744   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:12.038773   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:12.156945   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:12.156982   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:12.191539   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:12.191576   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:12.615499   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:12.615539   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:12.662305   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:12.662340   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:12.701720   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:12.701747   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:15.241370   71424 api_server.go:253] Checking apiserver healthz at https://192.168.50.13:8443/healthz ...
	I0913 20:02:15.246417   71424 api_server.go:279] https://192.168.50.13:8443/healthz returned 200:
	ok
	I0913 20:02:15.247538   71424 api_server.go:141] control plane version: v1.31.1
	I0913 20:02:15.247557   71424 api_server.go:131] duration metric: took 3.917905929s to wait for apiserver health ...
	I0913 20:02:15.247565   71424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:02:15.247592   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:15.247646   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:15.287202   71424 cri.go:89] found id: "7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.287223   71424 cri.go:89] found id: ""
	I0913 20:02:15.287231   71424 logs.go:276] 1 containers: [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3]
	I0913 20:02:15.287285   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.292060   71424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:15.292115   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:15.327342   71424 cri.go:89] found id: "a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:15.327367   71424 cri.go:89] found id: ""
	I0913 20:02:15.327376   71424 logs.go:276] 1 containers: [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3]
	I0913 20:02:15.327441   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.332284   71424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:15.332356   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:15.374686   71424 cri.go:89] found id: "e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.374708   71424 cri.go:89] found id: ""
	I0913 20:02:15.374714   71424 logs.go:276] 1 containers: [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73]
	I0913 20:02:15.374771   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.379199   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:15.379269   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:15.422011   71424 cri.go:89] found id: "4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.422034   71424 cri.go:89] found id: ""
	I0913 20:02:15.422044   71424 logs.go:276] 1 containers: [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf]
	I0913 20:02:15.422110   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.426331   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:15.426395   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:15.471552   71424 cri.go:89] found id: "adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.471570   71424 cri.go:89] found id: ""
	I0913 20:02:15.471577   71424 logs.go:276] 1 containers: [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991]
	I0913 20:02:15.471630   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.475964   71424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:15.476021   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:15.520619   71424 cri.go:89] found id: "e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.520647   71424 cri.go:89] found id: ""
	I0913 20:02:15.520656   71424 logs.go:276] 1 containers: [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d]
	I0913 20:02:15.520713   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.524851   71424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:15.524912   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:15.559283   71424 cri.go:89] found id: ""
	I0913 20:02:15.559309   71424 logs.go:276] 0 containers: []
	W0913 20:02:15.559320   71424 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:15.559327   71424 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:15.559383   71424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:15.597439   71424 cri.go:89] found id: "fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.597465   71424 cri.go:89] found id: "4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:15.597471   71424 cri.go:89] found id: ""
	I0913 20:02:15.597480   71424 logs.go:276] 2 containers: [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe]
	I0913 20:02:15.597540   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.601932   71424 ssh_runner.go:195] Run: which crictl
	I0913 20:02:15.605741   71424 logs.go:123] Gathering logs for kube-scheduler [4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf] ...
	I0913 20:02:15.605765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2bf4fed4e33c404d568d430bf58fe30ca0c9f0cf8964c530e93f1fd1a51edf"
	I0913 20:02:15.641300   71424 logs.go:123] Gathering logs for kube-proxy [adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991] ...
	I0913 20:02:15.641328   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbec8ff0ed7ae2d76b2b4191fbf08117c3c2c17aaf2f56bf763ce14fba83991"
	I0913 20:02:15.679604   71424 logs.go:123] Gathering logs for kube-controller-manager [e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d] ...
	I0913 20:02:15.679633   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6169bebe5711890f53f70a7ec514779cf495ee725c60dab67e7acdca64ef03d"
	I0913 20:02:15.731316   71424 logs.go:123] Gathering logs for storage-provisioner [fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950] ...
	I0913 20:02:15.731348   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc01d7b17bbc929489326ae7e806a248f2d3d5cbc145bf51eb1b0cf2ba88d950"
	I0913 20:02:15.774692   71424 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:15.774719   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:15.789708   71424 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:15.789733   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:15.899485   71424 logs.go:123] Gathering logs for kube-apiserver [7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3] ...
	I0913 20:02:15.899517   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b1108fd5841778e635c87c901ca2bc56071cc7f48f9b9812e1bf8fa063284c3"
	I0913 20:02:15.953758   71424 logs.go:123] Gathering logs for coredns [e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73] ...
	I0913 20:02:15.953795   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e70559352db6a4d712cb06065541ce8338cb0ea989eac571f538aa205d022f73"
	I0913 20:02:15.996235   71424 logs.go:123] Gathering logs for storage-provisioner [4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe] ...
	I0913 20:02:15.996266   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9c61bb677322f851f9504c0ffe2044897d80333391be76078a4f7825afe9fe"
	I0913 20:02:16.033729   71424 logs.go:123] Gathering logs for container status ...
	I0913 20:02:16.033765   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:16.083481   71424 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.083514   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.497982   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:13.511795   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:13.511865   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:13.551686   71926 cri.go:89] found id: ""
	I0913 20:02:13.551714   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.551723   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:13.551729   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:13.551779   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:13.584641   71926 cri.go:89] found id: ""
	I0913 20:02:13.584671   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.584682   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:13.584689   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:13.584740   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:13.622693   71926 cri.go:89] found id: ""
	I0913 20:02:13.622720   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.622731   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:13.622739   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:13.622801   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:13.661315   71926 cri.go:89] found id: ""
	I0913 20:02:13.661343   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.661355   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:13.661363   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:13.661422   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:13.698433   71926 cri.go:89] found id: ""
	I0913 20:02:13.698460   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.698471   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:13.698485   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:13.698541   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:13.733216   71926 cri.go:89] found id: ""
	I0913 20:02:13.733245   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.733256   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:13.733264   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:13.733323   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:13.769397   71926 cri.go:89] found id: ""
	I0913 20:02:13.769426   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.769436   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:13.769441   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:13.769502   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:13.806343   71926 cri.go:89] found id: ""
	I0913 20:02:13.806367   71926 logs.go:276] 0 containers: []
	W0913 20:02:13.806378   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:13.806389   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:13.806402   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:13.885918   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:13.885958   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:13.927909   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:13.927948   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:13.983289   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:13.983325   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:13.996867   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:13.996892   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:14.065876   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
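The cycle above repeats the same per-component check: list CRI containers for each control-plane component with crictl, find none, then fall back to gathering kubelet/CRI-O/dmesg logs because the API server at localhost:8443 never came up. A minimal Go sketch of that per-component listing, assuming local access to crictl (minikube actually runs these commands over SSH via its ssh_runner); the helper name and component list here are illustrative, not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>"
    // calls in the log: it returns the container IDs (possibly none) for one component.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }

An empty result for every component, as in the log, is what drives the run to keep re-gathering logs and eventually reset the cluster.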
	I0913 20:02:16.566876   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:16.581980   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:16.582059   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:16.620669   71926 cri.go:89] found id: ""
	I0913 20:02:16.620694   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.620703   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:16.620709   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:16.620758   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:16.660133   71926 cri.go:89] found id: ""
	I0913 20:02:16.660156   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.660165   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:16.660171   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:16.660218   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:16.695475   71926 cri.go:89] found id: ""
	I0913 20:02:16.695503   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.695515   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:16.695522   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:16.695581   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:16.730018   71926 cri.go:89] found id: ""
	I0913 20:02:16.730051   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.730063   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:16.730069   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:16.730136   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:16.763193   71926 cri.go:89] found id: ""
	I0913 20:02:16.763219   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.763230   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:16.763236   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:16.763303   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:16.800623   71926 cri.go:89] found id: ""
	I0913 20:02:16.800650   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.800662   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:16.800670   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:16.800730   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:16.834922   71926 cri.go:89] found id: ""
	I0913 20:02:16.834950   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.834961   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:16.834968   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:16.835012   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:16.872570   71926 cri.go:89] found id: ""
	I0913 20:02:16.872598   71926 logs.go:276] 0 containers: []
	W0913 20:02:16.872607   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:16.872615   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:16.872625   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:16.922229   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:16.922265   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:16.936954   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:16.936985   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:16.155161   71424 logs.go:123] Gathering logs for etcd [a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3] ...
	I0913 20:02:16.155202   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3490cc2f99b270938b5fed10c34d8466f4e2d304dac9cdacb69b239c36988e3"
	I0913 20:02:16.213457   71424 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:16.213494   71424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:19.078923   71424 system_pods.go:59] 8 kube-system pods found
	I0913 20:02:19.078950   71424 system_pods.go:61] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.078956   71424 system_pods.go:61] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.078959   71424 system_pods.go:61] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.078964   71424 system_pods.go:61] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.078967   71424 system_pods.go:61] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.078971   71424 system_pods.go:61] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.078976   71424 system_pods.go:61] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.078980   71424 system_pods.go:61] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.078988   71424 system_pods.go:74] duration metric: took 3.831418395s to wait for pod list to return data ...
	I0913 20:02:19.078995   71424 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:02:19.081391   71424 default_sa.go:45] found service account: "default"
	I0913 20:02:19.081412   71424 default_sa.go:55] duration metric: took 2.412971ms for default service account to be created ...
	I0913 20:02:19.081419   71424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:02:19.085561   71424 system_pods.go:86] 8 kube-system pods found
	I0913 20:02:19.085580   71424 system_pods.go:89] "coredns-7c65d6cfc9-fjzxv" [984f1946-61b1-4881-ae99-495855aaf948] Running
	I0913 20:02:19.085586   71424 system_pods.go:89] "etcd-no-preload-239327" [d0514967-93ea-4792-b49d-500c6800d102] Running
	I0913 20:02:19.085590   71424 system_pods.go:89] "kube-apiserver-no-preload-239327" [2e37433d-d767-4e6a-9697-79e99bb2cf74] Running
	I0913 20:02:19.085594   71424 system_pods.go:89] "kube-controller-manager-no-preload-239327" [71d86891-1378-43b5-af10-c45acb1ef854] Running
	I0913 20:02:19.085597   71424 system_pods.go:89] "kube-proxy-b24zg" [67fffd9e-ddf7-4abb-bfce-1528060d6b43] Running
	I0913 20:02:19.085601   71424 system_pods.go:89] "kube-scheduler-no-preload-239327" [22e17a4f-902f-4593-82dd-d2f04104e66a] Running
	I0913 20:02:19.085607   71424 system_pods.go:89] "metrics-server-6867b74b74-bq7jp" [9920ad88-3d00-458f-94d4-3dcfd0cd9a01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:02:19.085610   71424 system_pods.go:89] "storage-provisioner" [1cb55fe9-4adb-4d3e-9f26-34ee4b3f01a2] Running
	I0913 20:02:19.085616   71424 system_pods.go:126] duration metric: took 4.193561ms to wait for k8s-apps to be running ...
	I0913 20:02:19.085625   71424 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:02:19.085664   71424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:19.105440   71424 system_svc.go:56] duration metric: took 19.808703ms WaitForService to wait for kubelet
	I0913 20:02:19.105469   71424 kubeadm.go:582] duration metric: took 4m22.354619761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:02:19.105491   71424 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:02:19.109107   71424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:02:19.109126   71424 node_conditions.go:123] node cpu capacity is 2
	I0913 20:02:19.109136   71424 node_conditions.go:105] duration metric: took 3.640406ms to run NodePressure ...
	I0913 20:02:19.109146   71424 start.go:241] waiting for startup goroutines ...
	I0913 20:02:19.109153   71424 start.go:246] waiting for cluster config update ...
	I0913 20:02:19.109163   71424 start.go:255] writing updated cluster config ...
	I0913 20:02:19.109412   71424 ssh_runner.go:195] Run: rm -f paused
	I0913 20:02:19.156906   71424 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:02:19.158757   71424 out.go:177] * Done! kubectl is now configured to use "no-preload-239327" cluster and "default" namespace by default
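Before printing "Done!", the 71424 run verified that all kube-system pods were running, that the "default" service account existed, and that the kubelet service was active. A rough sketch of those final checks, under the assumption that the commands are run on the node as in the log (binary path and kubeconfig location copied from the output above); this is not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log's "systemctl is-active" check.
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        // The default service account must exist before workloads can be scheduled.
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("default service account not ready yet:", string(out))
            return
        }
        fmt.Println("kubelet running and default service account present")
    }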
	I0913 20:02:14.835749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:17.335566   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:16.431024   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:18.434223   71702 pod_ready.go:103] pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:19.425264   71702 pod_ready.go:82] duration metric: took 4m0.000872269s for pod "metrics-server-6867b74b74-7ltrm" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:19.425295   71702 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:19.425314   71702 pod_ready.go:39] duration metric: took 4m14.083085064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:19.425344   71702 kubeadm.go:597] duration metric: took 4m21.72399516s to restartPrimaryControlPlane
	W0913 20:02:19.425404   71702 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:19.425434   71702 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0913 20:02:17.022260   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:17.022281   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:17.022292   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:17.103233   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:17.103262   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:19.648871   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:19.664542   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:19.664619   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:19.704693   71926 cri.go:89] found id: ""
	I0913 20:02:19.704725   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.704738   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:02:19.704745   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:19.704808   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:19.746522   71926 cri.go:89] found id: ""
	I0913 20:02:19.746550   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.746562   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:02:19.746569   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:19.746630   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:19.782277   71926 cri.go:89] found id: ""
	I0913 20:02:19.782305   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.782316   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:02:19.782323   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:19.782390   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:19.822083   71926 cri.go:89] found id: ""
	I0913 20:02:19.822147   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.822157   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:02:19.822163   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:19.822221   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:19.865403   71926 cri.go:89] found id: ""
	I0913 20:02:19.865433   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.865443   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:02:19.865451   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:19.865513   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:19.906436   71926 cri.go:89] found id: ""
	I0913 20:02:19.906462   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.906471   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:02:19.906477   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:19.906535   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:19.943277   71926 cri.go:89] found id: ""
	I0913 20:02:19.943301   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.943311   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:19.943318   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:02:19.943379   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:02:19.978660   71926 cri.go:89] found id: ""
	I0913 20:02:19.978683   71926 logs.go:276] 0 containers: []
	W0913 20:02:19.978694   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:02:19.978704   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:19.978718   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:02:20.051748   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:02:20.051773   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:20.051788   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:20.133912   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:02:20.133951   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:02:20.174826   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:20.174854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:20.228002   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:20.228038   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:22.743346   71926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:02:22.757641   71926 kubeadm.go:597] duration metric: took 4m3.377721408s to restartPrimaryControlPlane
	W0913 20:02:22.757719   71926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 20:02:22.757750   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:02:23.489114   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:23.505494   71926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:23.516804   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:23.527373   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:23.527392   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:23.527433   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:02:23.537725   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:23.537797   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:23.548667   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:02:23.558212   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:23.558288   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:23.568235   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.577925   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:23.577989   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:23.587819   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:02:23.597266   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:23.597330   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
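The sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so the following "kubeadm init" can regenerate it. A small Go sketch of the same logic, with the endpoint and file paths taken from the log; the structure is illustrative, not a copy of minikube's kubeadm.go.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file itself) is missing,
            // which is exactly the "Process exited with status 2" seen in the log.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s may not contain %s - removing\n", f, endpoint)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }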
	I0913 20:02:23.607021   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:23.681432   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:02:23.681576   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:23.837573   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:23.837727   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:23.837858   71926 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:24.016267   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:19.336285   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:21.836115   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:23.837035   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:24.018067   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:24.018195   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:24.018297   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:24.018394   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:24.018482   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:24.018586   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:24.019167   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:24.019590   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:24.020258   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:24.020814   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:24.021409   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:24.021519   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:24.021596   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:24.094249   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:24.178186   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:24.412313   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:24.570296   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:24.585365   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:24.586493   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:24.586548   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:24.726026   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:24.727979   71926 out.go:235]   - Booting up control plane ...
	I0913 20:02:24.728117   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:24.740176   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:24.741334   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:24.742057   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:24.744724   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:02:26.336853   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:28.841632   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:31.336243   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:33.835739   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:36.337341   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:38.835188   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:40.836019   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:42.836112   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:45.681212   71702 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.255746666s)
	I0913 20:02:45.681319   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:02:45.700645   71702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 20:02:45.716032   71702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:02:45.735914   71702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:02:45.735934   71702 kubeadm.go:157] found existing configuration files:
	
	I0913 20:02:45.735991   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0913 20:02:45.746143   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:02:45.746212   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:02:45.756542   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0913 20:02:45.774317   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:02:45.774371   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:02:45.786627   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.796851   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:02:45.796913   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:02:45.817449   71702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0913 20:02:45.827702   71702 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:02:45.827769   71702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:02:45.838431   71702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:02:45.891108   71702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 20:02:45.891320   71702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:02:46.000041   71702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:02:46.000212   71702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:02:46.000375   71702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 20:02:46.008967   71702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:02:46.010730   71702 out.go:235]   - Generating certificates and keys ...
	I0913 20:02:46.010839   71702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:02:46.010943   71702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:02:46.011058   71702 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:02:46.011180   71702 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:02:46.011270   71702 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:02:46.011352   71702 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:02:46.011438   71702 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:02:46.011528   71702 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:02:46.011627   71702 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:02:46.011727   71702 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:02:46.011781   71702 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:02:46.011850   71702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:02:46.203740   71702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:02:46.287426   71702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 20:02:46.417622   71702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:02:46.837809   71702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:02:47.159346   71702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:02:47.159994   71702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:02:47.162768   71702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:02:45.335134   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.338183   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:47.164508   71702 out.go:235]   - Booting up control plane ...
	I0913 20:02:47.164636   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:02:47.164740   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:02:47.164827   71702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:02:47.182734   71702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:02:47.188946   71702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:02:47.189012   71702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:02:47.311613   71702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 20:02:47.311820   71702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 20:02:47.812730   71702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.220732ms
	I0913 20:02:47.812859   71702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 20:02:53.314958   71702 kubeadm.go:310] [api-check] The API server is healthy after 5.502078323s
	I0913 20:02:53.332711   71702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 20:02:53.363295   71702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 20:02:53.416780   71702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 20:02:53.417000   71702 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-512125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 20:02:53.450532   71702 kubeadm.go:310] [bootstrap-token] Using token: omlshd.2vtm45ugvt4lb37m
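The "[kubelet-check]" and "[api-check]" phases above are simple health polls: kubeadm waits for http://127.0.0.1:10248/healthz to answer and then for a healthy API server, each with a 4m0s budget. A sketch of that style of wait, assuming the URL and timeout from the log; the polling interval is an arbitrary choice here.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls a /healthz endpoint until it returns 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet is healthy")
    }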
	I0913 20:02:49.837005   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:52.336369   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:53.451903   71702 out.go:235]   - Configuring RBAC rules ...
	I0913 20:02:53.452024   71702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 20:02:53.474646   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 20:02:53.501155   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 20:02:53.510978   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 20:02:53.529034   71702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 20:02:53.540839   71702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 20:02:53.724625   71702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 20:02:54.178585   71702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 20:02:54.728758   71702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 20:02:54.729745   71702 kubeadm.go:310] 
	I0913 20:02:54.729808   71702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 20:02:54.729816   71702 kubeadm.go:310] 
	I0913 20:02:54.729906   71702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 20:02:54.729931   71702 kubeadm.go:310] 
	I0913 20:02:54.729981   71702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 20:02:54.730079   71702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 20:02:54.730170   71702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 20:02:54.730180   71702 kubeadm.go:310] 
	I0913 20:02:54.730386   71702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 20:02:54.730403   71702 kubeadm.go:310] 
	I0913 20:02:54.730453   71702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 20:02:54.730476   71702 kubeadm.go:310] 
	I0913 20:02:54.730538   71702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 20:02:54.730642   71702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 20:02:54.730737   71702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 20:02:54.730746   71702 kubeadm.go:310] 
	I0913 20:02:54.730866   71702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 20:02:54.730978   71702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 20:02:54.730990   71702 kubeadm.go:310] 
	I0913 20:02:54.731059   71702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731147   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 \
	I0913 20:02:54.731172   71702 kubeadm.go:310] 	--control-plane 
	I0913 20:02:54.731178   71702 kubeadm.go:310] 
	I0913 20:02:54.731250   71702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 20:02:54.731265   71702 kubeadm.go:310] 
	I0913 20:02:54.731385   71702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token omlshd.2vtm45ugvt4lb37m \
	I0913 20:02:54.731537   71702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a4240e11abfbfd373505036507e5ef67d548fb372e560a526b421dfcccc65691 
	I0913 20:02:54.732490   71702 kubeadm.go:310] W0913 20:02:45.866846    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732825   71702 kubeadm.go:310] W0913 20:02:45.867680    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 20:02:54.732991   71702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:02:54.733013   71702 cni.go:84] Creating CNI manager for ""
	I0913 20:02:54.733024   71702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 20:02:54.734613   71702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 20:02:54.735888   71702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 20:02:54.747812   71702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
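Here minikube creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist to configure the bridge CNI it recommended for the kvm2 + crio combination. The log does not print the file's contents; the sketch below only writes a minimal bridge + host-local conflist of the same general shape, so the network name and subnet are assumptions for illustration, not minikube's actual values.

    package main

    import (
        "fmt"
        "os"
    )

    // exampleConflist is an illustrative bridge CNI configuration, not the real
    // 496-byte file minikube ships.
    const exampleConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("wrote example bridge CNI conflist")
    }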
	I0913 20:02:54.769810   71702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 20:02:54.769849   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:54.769936   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512125 minikube.k8s.io/updated_at=2024_09_13T20_02_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=default-k8s-diff-port-512125 minikube.k8s.io/primary=true
	I0913 20:02:54.934477   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.021422   71702 ops.go:34] apiserver oom_adj: -16
	I0913 20:02:55.435528   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:55.935089   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.434609   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:56.934698   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.434523   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:57.935430   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.434786   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:58.935296   71702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 20:02:59.068131   71702 kubeadm.go:1113] duration metric: took 4.298327621s to wait for elevateKubeSystemPrivileges
	I0913 20:02:59.068171   71702 kubeadm.go:394] duration metric: took 5m1.428919049s to StartCluster
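The elevateKubeSystemPrivileges step timed above grants cluster-admin to kube-system's default service account and then retries "get sa default" until the controller manager has created it (the ~4.3s of repeated "get sa" calls in the log). A sketch of that step, reusing the binary path and kubeconfig from the log; the one-minute budget and retry cadence are arbitrary choices here, not minikube's.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        // Bind cluster-admin to kube-system:default, as in the log's clusterrolebinding call.
        _ = exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig).Run()

        // Retry until the default service account has been created.
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
                fmt.Println("default service account is available")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }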
	I0913 20:02:59.068191   71702 settings.go:142] acquiring lock: {Name:mk3b751ea8a9f3a6c2d19469cffad500e96411b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.068274   71702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 20:02:59.069936   71702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3902/kubeconfig: {Name:mk324cf401f96b93fed93af51e24b0634e5fa1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 20:02:59.070196   71702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 20:02:59.070258   71702 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 20:02:59.070355   71702 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070373   71702 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070386   71702 addons.go:243] addon storage-provisioner should already be in state true
	I0913 20:02:59.070383   71702 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512125"
	I0913 20:02:59.070407   71702 config.go:182] Loaded profile config "default-k8s-diff-port-512125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 20:02:59.070425   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070413   71702 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-512125"
	I0913 20:02:59.070447   71702 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.070457   71702 addons.go:243] addon metrics-server should already be in state true
	I0913 20:02:59.070481   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.070819   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070863   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070866   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070891   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.070911   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.070935   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.072027   71702 out.go:177] * Verifying Kubernetes components...
	I0913 20:02:59.073600   71702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 20:02:59.088175   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0913 20:02:59.088737   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.089296   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.089321   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.089716   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.090168   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0913 20:02:59.090184   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0913 20:02:59.090323   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.090370   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.090639   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.090642   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.091125   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091157   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091295   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.091309   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.091691   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.091749   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.092208   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.092244   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.092420   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.096383   71702 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-512125"
	W0913 20:02:59.096408   71702 addons.go:243] addon default-storageclass should already be in state true
	I0913 20:02:59.096439   71702 host.go:66] Checking if "default-k8s-diff-port-512125" exists ...
	I0913 20:02:59.096799   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.096839   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.110299   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0913 20:02:59.110382   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0913 20:02:59.110847   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.110951   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.111458   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111472   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111483   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.111500   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.111815   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.111979   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.112029   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.112585   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.114070   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.114919   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.116054   71702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 20:02:59.116911   71702 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0913 20:02:54.837749   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335281   71233 pod_ready.go:103] pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace has status "Ready":"False"
	I0913 20:02:57.335308   71233 pod_ready.go:82] duration metric: took 4m0.006028535s for pod "metrics-server-6867b74b74-fnznh" in "kube-system" namespace to be "Ready" ...
	E0913 20:02:57.335316   71233 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0913 20:02:57.335325   71233 pod_ready.go:39] duration metric: took 4m4.043499675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
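The lines above are the pod_ready wait giving up: metrics-server-6867b74b74-fnznh never reported Ready within the 4m0s budget, so the wait ends with "context deadline exceeded". A sketch of that kind of deadline-bounded Ready poll, using kubectl with a jsonpath query as an illustrative mechanism (not how minikube's pod_ready.go is implemented); namespace and pod name come from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is currently "True".
    func podReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if ok, err := podReady("kube-system", "metrics-server-6867b74b74-fnznh"); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("WaitExtra: waitPodCondition: context deadline exceeded")
    }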
	I0913 20:02:57.335338   71233 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:02:57.335365   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:02:57.335429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:02:57.384724   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:57.384750   71233 cri.go:89] found id: ""
	I0913 20:02:57.384759   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:02:57.384816   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.393335   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:02:57.393406   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:02:57.432064   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:57.432112   71233 cri.go:89] found id: ""
	I0913 20:02:57.432121   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:02:57.432170   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.437305   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:02:57.437363   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:02:57.484101   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:57.484125   71233 cri.go:89] found id: ""
	I0913 20:02:57.484135   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:02:57.484204   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.489057   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:02:57.489129   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:02:57.531094   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:57.531138   71233 cri.go:89] found id: ""
	I0913 20:02:57.531147   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:02:57.531208   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.536227   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:02:57.536290   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:02:57.575177   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:57.575204   71233 cri.go:89] found id: ""
	I0913 20:02:57.575213   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:02:57.575265   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.580702   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:02:57.580772   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:02:57.616846   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:57.616872   71233 cri.go:89] found id: ""
	I0913 20:02:57.616881   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:02:57.616937   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.626381   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:02:57.626438   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:02:57.665834   71233 cri.go:89] found id: ""
	I0913 20:02:57.665859   71233 logs.go:276] 0 containers: []
	W0913 20:02:57.665868   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:02:57.665873   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:02:57.665924   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:02:57.709261   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:57.709282   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:57.709286   71233 cri.go:89] found id: ""
	I0913 20:02:57.709293   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:02:57.709352   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.713629   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:02:57.717722   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:02:57.717739   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:02:57.791226   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:02:57.791258   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:02:57.967572   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:02:57.967614   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:02:58.035311   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:02:58.035356   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:02:58.076771   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:02:58.076801   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:02:58.120108   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:02:58.120138   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:02:58.169935   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:02:58.169964   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:02:58.213552   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:02:58.213579   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:02:58.227590   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:02:58.227618   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:02:58.272273   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:02:58.272304   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:02:58.325246   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:02:58.325282   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:02:58.383314   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:02:58.383344   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:02:58.878384   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:02:58.878423   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
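
The pass above is minikube tailing the last 400 lines from each control-plane container with crictl (plus the kubelet and CRI-O units via journalctl). A minimal Go sketch of that single step, assuming crictl is installed and runnable via sudo; this is an illustrative local stand-in, not minikube's ssh_runner, which executes the same command inside the VM over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs is a hypothetical helper mirroring the logged
    // "sudo crictl logs --tail 400 <id>" step; it assumes crictl is on PATH
    // and that the caller may invoke it through sudo.
    func tailContainerLogs(containerID string, tail int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs",
            "--tail", fmt.Sprint(tail), containerID).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Placeholder ID; substitute one reported by 'sudo crictl ps -a'.
        logs, err := tailContainerLogs("<container-id>", 400)
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(logs)
    }
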
	I0913 20:02:59.116960   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0913 20:02:59.117841   71702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.117861   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 20:02:59.117881   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.117970   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.118540   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.118559   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.118756   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 20:02:59.118776   71702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 20:02:59.118795   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.118937   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.120038   71702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 20:02:59.120119   71702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 20:02:59.122253   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122695   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122693   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.122727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.122937   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123131   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123151   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.123172   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.123321   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123523   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.123531   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.123629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.123727   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.123835   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.137333   71702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0913 20:02:59.137767   71702 main.go:141] libmachine: () Calling .GetVersion
	I0913 20:02:59.138291   71702 main.go:141] libmachine: Using API Version  1
	I0913 20:02:59.138311   71702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 20:02:59.138659   71702 main.go:141] libmachine: () Calling .GetMachineName
	I0913 20:02:59.138865   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetState
	I0913 20:02:59.140658   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .DriverName
	I0913 20:02:59.140891   71702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.140908   71702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 20:02:59.140934   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHHostname
	I0913 20:02:59.144330   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144802   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:54:e0", ip: ""} in network mk-default-k8s-diff-port-512125: {Iface:virbr2 ExpiryTime:2024-09-13 20:49:35 +0000 UTC Type:0 Mac:52:54:00:5b:54:e0 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-512125 Clientid:01:52:54:00:5b:54:e0}
	I0913 20:02:59.144834   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | domain default-k8s-diff-port-512125 has defined IP address 192.168.61.3 and MAC address 52:54:00:5b:54:e0 in network mk-default-k8s-diff-port-512125
	I0913 20:02:59.144971   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHPort
	I0913 20:02:59.145149   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHKeyPath
	I0913 20:02:59.145280   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .GetSSHUsername
	I0913 20:02:59.145398   71702 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/default-k8s-diff-port-512125/id_rsa Username:docker}
	I0913 20:02:59.313139   71702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 20:02:59.364703   71702 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390283   71702 node_ready.go:49] node "default-k8s-diff-port-512125" has status "Ready":"True"
	I0913 20:02:59.390322   71702 node_ready.go:38] duration metric: took 25.568477ms for node "default-k8s-diff-port-512125" to be "Ready" ...
	I0913 20:02:59.390335   71702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 20:02:59.404911   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:02:59.534386   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 20:02:59.534414   71702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0913 20:02:59.562931   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 20:02:59.562958   71702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 20:02:59.569447   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 20:02:59.630245   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 20:02:59.664309   71702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:02:59.664341   71702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 20:02:59.766546   71702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 20:03:00.996748   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.366470603s)
	I0913 20:03:00.996799   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996814   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.996831   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.427344727s)
	I0913 20:03:00.996874   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.996886   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997187   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997223   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997216   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997272   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997283   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997293   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997261   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997352   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:00.997360   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:00.997576   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997619   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:00.997631   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) DBG | Closing plugin on server side
	I0913 20:03:00.997657   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:00.997717   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.017603   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.017629   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.017896   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.017913   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.034684   71702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.268104844s)
	I0913 20:03:01.034739   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.034756   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.035100   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.035120   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.035137   71702 main.go:141] libmachine: Making call to close driver server
	I0913 20:03:01.035145   71702 main.go:141] libmachine: (default-k8s-diff-port-512125) Calling .Close
	I0913 20:03:01.036842   71702 main.go:141] libmachine: Successfully made call to close driver server
	I0913 20:03:01.036871   71702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 20:03:01.036882   71702 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-512125"
	I0913 20:03:01.039496   71702 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
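
The addon flow above stages each manifest under /etc/kubernetes/addons via scp and then applies it with the kubelet-bundled kubectl, run under sudo with KUBECONFIG pointed at the in-VM kubeconfig. A rough sketch of that apply step, with the paths copied from the log; it is only meaningful when executed on the minikube node itself:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the logged command: sudo KUBECONFIG=... kubectl apply -f <addon manifests>.
        // Paths are taken from the log above; outside the VM this is expected to fail.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
    }
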
	I0913 20:03:01.432233   71233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:01.452473   71233 api_server.go:72] duration metric: took 4m15.872372226s to wait for apiserver process to appear ...
	I0913 20:03:01.452503   71233 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:01.452544   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:01.452600   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:01.495509   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:01.495532   71233 cri.go:89] found id: ""
	I0913 20:03:01.495539   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:01.495601   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.502156   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:01.502244   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:01.545020   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.545046   71233 cri.go:89] found id: ""
	I0913 20:03:01.545056   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:01.545114   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.549607   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:01.549675   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:01.589590   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.589619   71233 cri.go:89] found id: ""
	I0913 20:03:01.589627   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:01.589677   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.595352   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:01.595429   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:01.642418   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:01.642441   71233 cri.go:89] found id: ""
	I0913 20:03:01.642449   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:01.642511   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.647937   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:01.648004   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:01.691575   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:01.691603   71233 cri.go:89] found id: ""
	I0913 20:03:01.691612   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:01.691669   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.697223   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:01.697296   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:01.737359   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:01.737386   71233 cri.go:89] found id: ""
	I0913 20:03:01.737395   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:01.737453   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.743717   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:01.743779   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:01.784813   71233 cri.go:89] found id: ""
	I0913 20:03:01.784836   71233 logs.go:276] 0 containers: []
	W0913 20:03:01.784845   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:01.784849   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:01.784898   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:01.823391   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.823420   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:01.823427   71233 cri.go:89] found id: ""
	I0913 20:03:01.823436   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:01.823484   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.828764   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:01.834519   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:01.834546   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:01.872925   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:01.872954   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:01.927669   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:01.927702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:01.973537   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:01.973576   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:02.017320   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:02.017353   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:02.064003   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:02.064042   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:02.134901   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:02.134933   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:02.150541   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:02.150575   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:02.268583   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:02.268626   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:02.320972   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:02.321004   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:02.373848   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:02.373881   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:02.409851   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:02.409882   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:02.833329   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:02.833384   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:01.041611   71702 addons.go:510] duration metric: took 1.971356508s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0913 20:03:01.415839   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:03.911854   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:04.745279   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:03:04.745917   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:04.746165   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:05.413146   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:05.413172   71702 pod_ready.go:82] duration metric: took 6.008227569s for pod "coredns-7c65d6cfc9-2qg68" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:05.413184   71702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.420197   71702 pod_ready.go:103] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"False"
	I0913 20:03:07.920309   71702 pod_ready.go:93] pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.920333   71702 pod_ready.go:82] duration metric: took 2.507141455s for pod "coredns-7c65d6cfc9-pm4s9" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.920342   71702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924871   71702 pod_ready.go:93] pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.924892   71702 pod_ready.go:82] duration metric: took 4.543474ms for pod "etcd-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.924901   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929323   71702 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.929343   71702 pod_ready.go:82] duration metric: took 4.435416ms for pod "kube-apiserver-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.929351   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933200   71702 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.933225   71702 pod_ready.go:82] duration metric: took 3.865423ms for pod "kube-controller-manager-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.933237   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938215   71702 pod_ready.go:93] pod "kube-proxy-6zfwm" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:07.938241   71702 pod_ready.go:82] duration metric: took 4.996366ms for pod "kube-proxy-6zfwm" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:07.938251   71702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317175   71702 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace has status "Ready":"True"
	I0913 20:03:08.317200   71702 pod_ready.go:82] duration metric: took 378.941006ms for pod "kube-scheduler-default-k8s-diff-port-512125" in "kube-system" namespace to be "Ready" ...
	I0913 20:03:08.317207   71702 pod_ready.go:39] duration metric: took 8.926861264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
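
The readiness wait above polls each system-critical pod until its Ready condition reports True, then records the duration. A small client-go sketch of that pattern; it approximates what the pod_ready.go helpers do and is not minikube's actual code (the pod name below is just an example taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling through transient API errors
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        // Assumes a reachable cluster via ~/.kube/config.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-2qg68", 6*time.Minute))
    }
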
	I0913 20:03:08.317220   71702 api_server.go:52] waiting for apiserver process to appear ...
	I0913 20:03:08.317270   71702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 20:03:08.332715   71702 api_server.go:72] duration metric: took 9.262487177s to wait for apiserver process to appear ...
	I0913 20:03:08.332745   71702 api_server.go:88] waiting for apiserver healthz status ...
	I0913 20:03:08.332766   71702 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0913 20:03:08.337492   71702 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0913 20:03:08.338513   71702 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:08.338534   71702 api_server.go:131] duration metric: took 5.781718ms to wait for apiserver health ...
	I0913 20:03:08.338540   71702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:08.519723   71702 system_pods.go:59] 9 kube-system pods found
	I0913 20:03:08.519751   71702 system_pods.go:61] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.519756   71702 system_pods.go:61] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.519760   71702 system_pods.go:61] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.519764   71702 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.519767   71702 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.519770   71702 system_pods.go:61] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.519773   71702 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.519779   71702 system_pods.go:61] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.519782   71702 system_pods.go:61] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.519790   71702 system_pods.go:74] duration metric: took 181.244915ms to wait for pod list to return data ...
	I0913 20:03:08.519797   71702 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:08.717123   71702 default_sa.go:45] found service account: "default"
	I0913 20:03:08.717146   71702 default_sa.go:55] duration metric: took 197.343901ms for default service account to be created ...
	I0913 20:03:08.717155   71702 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:08.920347   71702 system_pods.go:86] 9 kube-system pods found
	I0913 20:03:08.920378   71702 system_pods.go:89] "coredns-7c65d6cfc9-2qg68" [06d7bc39-7f7b-405c-828f-22d68741b063] Running
	I0913 20:03:08.920383   71702 system_pods.go:89] "coredns-7c65d6cfc9-pm4s9" [82a23abb-d3a2-415b-a992-971fe65ee840] Running
	I0913 20:03:08.920388   71702 system_pods.go:89] "etcd-default-k8s-diff-port-512125" [b146d4ea-c125-42c6-8435-b23a83c75597] Running
	I0913 20:03:08.920392   71702 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512125" [69a6fc72-2e73-41ad-a945-985c4f3b406c] Running
	I0913 20:03:08.920396   71702 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512125" [7875db7b-84da-4dae-b78c-beb539869479] Running
	I0913 20:03:08.920401   71702 system_pods.go:89] "kube-proxy-6zfwm" [b62cff15-1c67-42d6-a30b-6f43a914fa0c] Running
	I0913 20:03:08.920407   71702 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512125" [840349ee-5068-4ccf-9e6f-9879904b3647] Running
	I0913 20:03:08.920415   71702 system_pods.go:89] "metrics-server-6867b74b74-tk8qn" [e4e5d427-7760-4397-8529-3ae3734ed891] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:08.920421   71702 system_pods.go:89] "storage-provisioner" [7cd5034f-5d90-4155-acab-804dca90a2ed] Running
	I0913 20:03:08.920433   71702 system_pods.go:126] duration metric: took 203.271141ms to wait for k8s-apps to be running ...
	I0913 20:03:08.920446   71702 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:08.920492   71702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:08.937818   71702 system_svc.go:56] duration metric: took 17.363979ms WaitForService to wait for kubelet
	I0913 20:03:08.937850   71702 kubeadm.go:582] duration metric: took 9.867627646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:08.937866   71702 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.117836   71702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.117861   71702 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.117870   71702 node_conditions.go:105] duration metric: took 180.000591ms to run NodePressure ...
	I0913 20:03:09.117880   71702 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.117886   71702 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.117896   71702 start.go:255] writing updated cluster config ...
	I0913 20:03:09.118224   71702 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.166470   71702 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.168569   71702 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-512125" cluster and "default" namespace by default
	I0913 20:03:05.379534   71233 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0913 20:03:05.385296   71233 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0913 20:03:05.386447   71233 api_server.go:141] control plane version: v1.31.1
	I0913 20:03:05.386467   71233 api_server.go:131] duration metric: took 3.933956718s to wait for apiserver health ...
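
Both start paths above finish by probing the apiserver's /healthz endpoint and treating a 200 response with body "ok" as healthy. A bare-bones sketch of that probe; minikube's api_server.go verifies the cluster CA, which this sketch skips (hence InsecureSkipVerify), and the endpoint is simply the one from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz reports whether <base>/healthz answered 200 with body "ok".
    // TLS verification is skipped for brevity only.
    func checkHealthz(base string) (bool, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        // Endpoint taken from the log; adjust for your cluster.
        ok, err := checkHealthz("https://192.168.39.32:8443")
        fmt.Println(ok, err)
    }
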
	I0913 20:03:05.386476   71233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 20:03:05.386501   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:03:05.386558   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:03:05.435632   71233 cri.go:89] found id: "8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:05.435663   71233 cri.go:89] found id: ""
	I0913 20:03:05.435674   71233 logs.go:276] 1 containers: [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d]
	I0913 20:03:05.435734   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.440489   71233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:03:05.440552   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:03:05.479659   71233 cri.go:89] found id: "b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.479684   71233 cri.go:89] found id: ""
	I0913 20:03:05.479692   71233 logs.go:276] 1 containers: [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0]
	I0913 20:03:05.479739   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.483811   71233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:03:05.483868   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:03:05.519053   71233 cri.go:89] found id: "5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:05.519077   71233 cri.go:89] found id: ""
	I0913 20:03:05.519085   71233 logs.go:276] 1 containers: [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7]
	I0913 20:03:05.519139   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.523529   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:03:05.523596   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:03:05.560575   71233 cri.go:89] found id: "c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.560599   71233 cri.go:89] found id: ""
	I0913 20:03:05.560608   71233 logs.go:276] 1 containers: [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f]
	I0913 20:03:05.560655   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.564712   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:03:05.564761   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:03:05.602092   71233 cri.go:89] found id: "57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.602131   71233 cri.go:89] found id: ""
	I0913 20:03:05.602141   71233 logs.go:276] 1 containers: [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86]
	I0913 20:03:05.602202   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.606465   71233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:03:05.606531   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:03:05.652471   71233 cri.go:89] found id: "3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:05.652499   71233 cri.go:89] found id: ""
	I0913 20:03:05.652509   71233 logs.go:276] 1 containers: [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73]
	I0913 20:03:05.652567   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.656969   71233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:03:05.657028   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:03:05.695549   71233 cri.go:89] found id: ""
	I0913 20:03:05.695575   71233 logs.go:276] 0 containers: []
	W0913 20:03:05.695586   71233 logs.go:278] No container was found matching "kindnet"
	I0913 20:03:05.695594   71233 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0913 20:03:05.695657   71233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0913 20:03:05.732796   71233 cri.go:89] found id: "db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.732824   71233 cri.go:89] found id: "d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.732830   71233 cri.go:89] found id: ""
	I0913 20:03:05.732838   71233 logs.go:276] 2 containers: [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3]
	I0913 20:03:05.732905   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.737676   71233 ssh_runner.go:195] Run: which crictl
	I0913 20:03:05.742071   71233 logs.go:123] Gathering logs for etcd [b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0] ...
	I0913 20:03:05.742109   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7288e6c437a29f9a016812446ce5905f4cc78974775e1f26b227cd41414a8c0"
	I0913 20:03:05.792956   71233 logs.go:123] Gathering logs for kube-scheduler [c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f] ...
	I0913 20:03:05.792984   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32212fb06588456e47850075601e1e9622d8fab0cd30faa9a6289ff74e4bc9f"
	I0913 20:03:05.834623   71233 logs.go:123] Gathering logs for kube-proxy [57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86] ...
	I0913 20:03:05.834651   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57402126568c76ff63c95511c256896d8fb9ec9889deab2a2816385513386b86"
	I0913 20:03:05.872365   71233 logs.go:123] Gathering logs for storage-provisioner [db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11] ...
	I0913 20:03:05.872395   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0694e689431d2416b903e0f01588bdf740ca0d24d9561799e853cfb065cb11"
	I0913 20:03:05.909565   71233 logs.go:123] Gathering logs for storage-provisioner [d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3] ...
	I0913 20:03:05.909589   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21ac9f9341fd40b5f181668035e7327fd6c2310c56a7f5f0158d440bfd2e9a3"
	I0913 20:03:05.950037   71233 logs.go:123] Gathering logs for container status ...
	I0913 20:03:05.950073   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:03:06.006670   71233 logs.go:123] Gathering logs for kubelet ...
	I0913 20:03:06.006702   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 20:03:06.075591   71233 logs.go:123] Gathering logs for dmesg ...
	I0913 20:03:06.075633   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:03:06.090020   71233 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:03:06.090051   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 20:03:06.193190   71233 logs.go:123] Gathering logs for kube-apiserver [8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d] ...
	I0913 20:03:06.193216   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b66cfda64cc01b2add3842317e8e38bd44940ec52b87ff6868e2b782ceb0d"
	I0913 20:03:06.236386   71233 logs.go:123] Gathering logs for coredns [5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7] ...
	I0913 20:03:06.236414   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58f184d570427c2adde5e982b72f455eb375ced496ce62a3217f22f7c409e7"
	I0913 20:03:06.276618   71233 logs.go:123] Gathering logs for kube-controller-manager [3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73] ...
	I0913 20:03:06.276644   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e8d6c49b3b396aea040cc47de27b36e0f6018361f1925fe36424adffe6a6b73"
	I0913 20:03:06.332088   71233 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:03:06.332119   71233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:03:09.189499   71233 system_pods.go:59] 8 kube-system pods found
	I0913 20:03:09.189533   71233 system_pods.go:61] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.189542   71233 system_pods.go:61] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.189549   71233 system_pods.go:61] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.189564   71233 system_pods.go:61] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.189571   71233 system_pods.go:61] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.189577   71233 system_pods.go:61] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.189588   71233 system_pods.go:61] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.189597   71233 system_pods.go:61] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.189610   71233 system_pods.go:74] duration metric: took 3.803122963s to wait for pod list to return data ...
	I0913 20:03:09.189618   71233 default_sa.go:34] waiting for default service account to be created ...
	I0913 20:03:09.192997   71233 default_sa.go:45] found service account: "default"
	I0913 20:03:09.193023   71233 default_sa.go:55] duration metric: took 3.397513ms for default service account to be created ...
	I0913 20:03:09.193033   71233 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 20:03:09.198238   71233 system_pods.go:86] 8 kube-system pods found
	I0913 20:03:09.198263   71233 system_pods.go:89] "coredns-7c65d6cfc9-lrrkx" [3b5420cc-a0cd-4f34-8f65-9fa6fd5dd02f] Running
	I0913 20:03:09.198268   71233 system_pods.go:89] "etcd-embed-certs-175374" [4f645ba5-cffa-49a2-9219-8e635f58abd3] Running
	I0913 20:03:09.198272   71233 system_pods.go:89] "kube-apiserver-embed-certs-175374" [4c21b983-d949-4642-b17c-b547330b0d05] Running
	I0913 20:03:09.198276   71233 system_pods.go:89] "kube-controller-manager-embed-certs-175374" [defb4f6c-81f8-4405-b2c0-abe91c846f4c] Running
	I0913 20:03:09.198280   71233 system_pods.go:89] "kube-proxy-jv77q" [28580bbe-7c5f-4161-8370-41f3286d508c] Running
	I0913 20:03:09.198284   71233 system_pods.go:89] "kube-scheduler-embed-certs-175374" [c5aa66b9-523c-46e8-b8e4-70680490dbf5] Running
	I0913 20:03:09.198291   71233 system_pods.go:89] "metrics-server-6867b74b74-fnznh" [9ca67e1c-a852-4513-abfc-ace5908d2727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 20:03:09.198298   71233 system_pods.go:89] "storage-provisioner" [afa99920-b51a-4d30-a8e0-269a0beeee8a] Running
	I0913 20:03:09.198305   71233 system_pods.go:126] duration metric: took 5.267005ms to wait for k8s-apps to be running ...
	I0913 20:03:09.198314   71233 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 20:03:09.198349   71233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:03:09.216256   71233 system_svc.go:56] duration metric: took 17.93212ms WaitForService to wait for kubelet
	I0913 20:03:09.216295   71233 kubeadm.go:582] duration metric: took 4m23.636198466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 20:03:09.216318   71233 node_conditions.go:102] verifying NodePressure condition ...
	I0913 20:03:09.219598   71233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 20:03:09.219623   71233 node_conditions.go:123] node cpu capacity is 2
	I0913 20:03:09.219634   71233 node_conditions.go:105] duration metric: took 3.310981ms to run NodePressure ...
	I0913 20:03:09.219644   71233 start.go:241] waiting for startup goroutines ...
	I0913 20:03:09.219650   71233 start.go:246] waiting for cluster config update ...
	I0913 20:03:09.219659   71233 start.go:255] writing updated cluster config ...
	I0913 20:03:09.219956   71233 ssh_runner.go:195] Run: rm -f paused
	I0913 20:03:09.275861   71233 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 20:03:09.277856   71233 out.go:177] * Done! kubectl is now configured to use "embed-certs-175374" cluster and "default" namespace by default
	I0913 20:03:09.746358   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:09.746651   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:19.746817   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:19.747178   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:03:39.747563   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:03:39.747837   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749006   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:04:19.749293   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:04:19.749318   71926 kubeadm.go:310] 
	I0913 20:04:19.749381   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:04:19.749450   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:04:19.749486   71926 kubeadm.go:310] 
	I0913 20:04:19.749554   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:04:19.749588   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:04:19.749737   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:04:19.749752   71926 kubeadm.go:310] 
	I0913 20:04:19.749887   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:04:19.749920   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:04:19.749949   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:04:19.749955   71926 kubeadm.go:310] 
	I0913 20:04:19.750044   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:04:19.750136   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:04:19.750145   71926 kubeadm.go:310] 
	I0913 20:04:19.750247   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:04:19.750339   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:04:19.750430   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:04:19.750523   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:04:19.750574   71926 kubeadm.go:310] 
	I0913 20:04:19.751362   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:04:19.751496   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:04:19.751584   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
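
The failed kubeadm init above is the kubelet-check loop hitting the kubelet's local healthz port (10248) and getting connection refused until the 4m0s wait expires. The probe itself is trivial; a sketch equivalent to the 'curl -sSL http://localhost:10248/healthz' call quoted in the log, assuming it is run on the affected node:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Same check kubeadm performs against the kubelet healthz endpoint.
        client := &http.Client{Timeout: 3 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // Matches the failure in the log: connection refused while the kubelet is down.
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz status:", resp.Status)
    }
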
	W0913 20:04:19.751725   71926 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0913 20:04:19.751774   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0913 20:04:20.207124   71926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 20:04:20.223940   71926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 20:04:20.234902   71926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 20:04:20.234923   71926 kubeadm.go:157] found existing configuration files:
	
	I0913 20:04:20.234970   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 20:04:20.246057   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 20:04:20.246154   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 20:04:20.257102   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 20:04:20.266497   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 20:04:20.266545   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 20:04:20.276259   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.285640   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 20:04:20.285690   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 20:04:20.294988   71926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 20:04:20.303978   71926 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 20:04:20.304021   71926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 20:04:20.313426   71926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 20:04:20.392431   71926 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0913 20:04:20.392519   71926 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 20:04:20.550959   71926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 20:04:20.551072   71926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 20:04:20.551169   71926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 20:04:20.746999   71926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 20:04:20.749008   71926 out.go:235]   - Generating certificates and keys ...
	I0913 20:04:20.749104   71926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 20:04:20.749181   71926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 20:04:20.749255   71926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 20:04:20.749339   71926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 20:04:20.749457   71926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 20:04:20.749540   71926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 20:04:20.749608   71926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 20:04:20.749670   71926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 20:04:20.749732   71926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 20:04:20.749801   71926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 20:04:20.749833   71926 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 20:04:20.749877   71926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 20:04:20.997924   71926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 20:04:21.175932   71926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 20:04:21.442609   71926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 20:04:21.714181   71926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 20:04:21.737741   71926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 20:04:21.738987   71926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 20:04:21.739058   71926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 20:04:21.885498   71926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 20:04:21.887033   71926 out.go:235]   - Booting up control plane ...
	I0913 20:04:21.887170   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 20:04:21.893768   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 20:04:21.893887   71926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 20:04:21.894035   71926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 20:04:21.904503   71926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 20:05:01.906985   71926 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0913 20:05:01.907109   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:01.907459   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:06.907586   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:06.907859   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:16.908086   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:16.908310   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:05:36.908887   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:05:36.909114   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.909944   71926 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0913 20:06:16.910449   71926 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0913 20:06:16.910509   71926 kubeadm.go:310] 
	I0913 20:06:16.910582   71926 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0913 20:06:16.910675   71926 kubeadm.go:310] 		timed out waiting for the condition
	I0913 20:06:16.910694   71926 kubeadm.go:310] 
	I0913 20:06:16.910764   71926 kubeadm.go:310] 	This error is likely caused by:
	I0913 20:06:16.910837   71926 kubeadm.go:310] 		- The kubelet is not running
	I0913 20:06:16.910965   71926 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0913 20:06:16.910979   71926 kubeadm.go:310] 
	I0913 20:06:16.911088   71926 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0913 20:06:16.911126   71926 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0913 20:06:16.911172   71926 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0913 20:06:16.911180   71926 kubeadm.go:310] 
	I0913 20:06:16.911298   71926 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0913 20:06:16.911400   71926 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0913 20:06:16.911415   71926 kubeadm.go:310] 
	I0913 20:06:16.911586   71926 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0913 20:06:16.911787   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0913 20:06:16.911938   71926 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0913 20:06:16.912063   71926 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0913 20:06:16.912087   71926 kubeadm.go:310] 
	I0913 20:06:16.912489   71926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 20:06:16.912671   71926 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0913 20:06:16.912770   71926 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0913 20:06:16.912842   71926 kubeadm.go:394] duration metric: took 7m57.58767006s to StartCluster
	I0913 20:06:16.912893   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0913 20:06:16.912960   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0913 20:06:16.957280   71926 cri.go:89] found id: ""
	I0913 20:06:16.957307   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.957319   71926 logs.go:278] No container was found matching "kube-apiserver"
	I0913 20:06:16.957326   71926 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0913 20:06:16.957393   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0913 20:06:16.997065   71926 cri.go:89] found id: ""
	I0913 20:06:16.997095   71926 logs.go:276] 0 containers: []
	W0913 20:06:16.997106   71926 logs.go:278] No container was found matching "etcd"
	I0913 20:06:16.997115   71926 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0913 20:06:16.997193   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0913 20:06:17.033075   71926 cri.go:89] found id: ""
	I0913 20:06:17.033099   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.033107   71926 logs.go:278] No container was found matching "coredns"
	I0913 20:06:17.033112   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0913 20:06:17.033180   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0913 20:06:17.071062   71926 cri.go:89] found id: ""
	I0913 20:06:17.071090   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.071101   71926 logs.go:278] No container was found matching "kube-scheduler"
	I0913 20:06:17.071108   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0913 20:06:17.071176   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0913 20:06:17.107558   71926 cri.go:89] found id: ""
	I0913 20:06:17.107584   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.107594   71926 logs.go:278] No container was found matching "kube-proxy"
	I0913 20:06:17.107599   71926 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0913 20:06:17.107658   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0913 20:06:17.146037   71926 cri.go:89] found id: ""
	I0913 20:06:17.146066   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.146076   71926 logs.go:278] No container was found matching "kube-controller-manager"
	I0913 20:06:17.146083   71926 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0913 20:06:17.146158   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0913 20:06:17.189129   71926 cri.go:89] found id: ""
	I0913 20:06:17.189163   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.189174   71926 logs.go:278] No container was found matching "kindnet"
	I0913 20:06:17.189181   71926 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0913 20:06:17.189241   71926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0913 20:06:17.224018   71926 cri.go:89] found id: ""
	I0913 20:06:17.224045   71926 logs.go:276] 0 containers: []
	W0913 20:06:17.224056   71926 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0913 20:06:17.224067   71926 logs.go:123] Gathering logs for dmesg ...
	I0913 20:06:17.224081   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 20:06:17.238494   71926 logs.go:123] Gathering logs for describe nodes ...
	I0913 20:06:17.238526   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0913 20:06:17.319627   71926 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0913 20:06:17.319650   71926 logs.go:123] Gathering logs for CRI-O ...
	I0913 20:06:17.319663   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0913 20:06:17.472823   71926 logs.go:123] Gathering logs for container status ...
	I0913 20:06:17.472854   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 20:06:17.515919   71926 logs.go:123] Gathering logs for kubelet ...
	I0913 20:06:17.515957   71926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0913 20:06:17.566082   71926 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0913 20:06:17.566145   71926 out.go:270] * 
	W0913 20:06:17.566249   71926 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.566272   71926 out.go:270] * 
	W0913 20:06:17.567031   71926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 20:06:17.570669   71926 out.go:201] 
	W0913 20:06:17.571961   71926 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0913 20:06:17.572007   71926 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0913 20:06:17.572024   71926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0913 20:06:17.573440   71926 out.go:201] 
	
	
	==> CRI-O <==
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.300879121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258670300836992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d94c3159-d1a4-49cc-88e9-86e0418f1512 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.301622894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f4bd300-9013-43dd-ab7c-2e40ea36cd1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.301715467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f4bd300-9013-43dd-ab7c-2e40ea36cd1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.301855958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2f4bd300-9013-43dd-ab7c-2e40ea36cd1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.337432679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e469d1b-5235-48f2-bdef-bdf006f24bcc name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.337524977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e469d1b-5235-48f2-bdef-bdf006f24bcc name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.338812521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aaa37dca-82b4-4be3-9e78-06fd840d1acf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.339456674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258670339422677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aaa37dca-82b4-4be3-9e78-06fd840d1acf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.340155252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb1bef74-f402-4004-948f-162c6f8ef6ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.340256466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb1bef74-f402-4004-948f-162c6f8ef6ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.340328943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cb1bef74-f402-4004-948f-162c6f8ef6ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.376071268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84939752-5d87-480c-ad5d-cae1da59de8c name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.376170546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84939752-5d87-480c-ad5d-cae1da59de8c name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.377297700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5ca0d3d-a835-4bf0-8ee5-4df80d53af44 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.377797743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258670377710238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5ca0d3d-a835-4bf0-8ee5-4df80d53af44 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.378497748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddd7e720-b0d9-4a0d-b93c-89642eb8206e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.378566857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddd7e720-b0d9-4a0d-b93c-89642eb8206e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.378606639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ddd7e720-b0d9-4a0d-b93c-89642eb8206e name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.411519130Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=527e0390-6dad-4145-89ae-ffae5ab0272e name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.411616565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=527e0390-6dad-4145-89ae-ffae5ab0272e name=/runtime.v1.RuntimeService/Version
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.414017746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3f86ce8-604d-46c5-937a-e6849a18aa34 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.414432147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726258670414403742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3f86ce8-604d-46c5-937a-e6849a18aa34 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.415052805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b66c5e4-078f-47f0-9aab-343f34cffe81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.415127635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b66c5e4-078f-47f0-9aab-343f34cffe81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 20:17:50 old-k8s-version-234290 crio[635]: time="2024-09-13 20:17:50.415173300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0b66c5e4-078f-47f0-9aab-343f34cffe81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep13 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066109] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep13 19:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610500] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.676115] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.362178] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.066050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062575] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.203353] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.197412] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.328737] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.657608] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.063640] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.000194] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +13.374485] kauditd_printk_skb: 46 callbacks suppressed
	[Sep13 20:02] systemd-fstab-generator[5056]: Ignoring "noauto" option for root device
	[Sep13 20:04] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[  +0.071026] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:17:50 up 19 min,  0 users,  load average: 0.12, 0.08, 0.04
	Linux old-k8s-version-234290 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000256540, 0x4f04d00, 0xc00040fa30)
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00079c6f0)
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bf1ef0, 0x4f0ac20, 0xc0003c3720, 0x1, 0xc0001000c0)
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000256540, 0xc0001000c0)
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00046fde0, 0xc000c136c0)
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6823]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 13 20:17:47 old-k8s-version-234290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Sep 13 20:17:47 old-k8s-version-234290 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 13 20:17:47 old-k8s-version-234290 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6831]: I0913 20:17:47.923689    6831 server.go:416] Version: v1.20.0
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6831]: I0913 20:17:47.924047    6831 server.go:837] Client rotation is on, will bootstrap in background
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6831]: I0913 20:17:47.926110    6831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6831]: I0913 20:17:47.927362    6831 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 13 20:17:47 old-k8s-version-234290 kubelet[6831]: W0913 20:17:47.927402    6831 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
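Editor's note (not part of the captured output): the kubelet journal above shows a steady crash loop (systemd reports "restart counter is at 140") together with the v1.20.0 kubelet logging "Cannot detect current cgroup on cgroup v2", which is exactly the situation minikube's Suggestion line points at. As a minimal, hedged sketch of acting on that suggestion, assuming the profile name from this run and using only the flag quoted in the log:

    # retry the profile with the kubelet cgroup driver pinned to systemd (flag taken from the Suggestion line above)
    minikube start -p old-k8s-version-234290 --kubernetes-version=v1.20.0 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd

    # then inspect the kubelet and the CRI-O containers on the node, as the kubeadm hint recommends
    minikube ssh -p old-k8s-version-234290 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-234290 -- sudo journalctl -u kubelet -n 100 --no-pager
    minikube ssh -p old-k8s-version-234290 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a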
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 2 (242.000519ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-234290" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.44s)
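Editor's note: this FAIL records the helper giving up, not the addon being observed missing; the API server stayed "Stopped" for the whole 147s window, so the kubectl checks were skipped. A rough, hedged sketch of re-running the probe by hand once the control plane recovers (the status command is the one used above; the kubectl context name matching the profile and the cluster-wide listing are assumptions, not taken from this log):

    # re-check whether the apiserver is reachable for this profile
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234290 -n old-k8s-version-234290

    # if it reports Running, list pods across namespaces to see which addon workloads actually came back
    kubectl --context old-k8s-version-234290 get pods -A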


Test pass (240/310)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 32.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 24.18
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 112.23
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 141.31
31 TestAddons/serial/GCPAuth/Namespaces 0.17
35 TestAddons/parallel/InspektorGadget 12.03
38 TestAddons/parallel/CSI 50.25
39 TestAddons/parallel/Headlamp 20.69
40 TestAddons/parallel/CloudSpanner 6.63
41 TestAddons/parallel/LocalPath 55.45
42 TestAddons/parallel/NvidiaDevicePlugin 6.56
43 TestAddons/parallel/Yakd 10.75
44 TestAddons/StoppedEnableDisable 7.54
45 TestCertOptions 48.81
46 TestCertExpiration 301.27
48 TestForceSystemdFlag 65.86
49 TestForceSystemdEnv 46.43
51 TestKVMDriverInstallOrUpdate 5.45
55 TestErrorSpam/setup 40.51
56 TestErrorSpam/start 0.33
57 TestErrorSpam/status 0.71
58 TestErrorSpam/pause 1.55
59 TestErrorSpam/unpause 1.75
60 TestErrorSpam/stop 4.93
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 79.6
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 32.96
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
72 TestFunctional/serial/CacheCmd/cache/add_local 2.26
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 28.98
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.41
83 TestFunctional/serial/LogsFileCmd 1.42
84 TestFunctional/serial/InvalidService 4.87
86 TestFunctional/parallel/ConfigCmd 0.33
87 TestFunctional/parallel/DashboardCmd 23.47
88 TestFunctional/parallel/DryRun 0.3
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 8.89
95 TestFunctional/parallel/AddonsCmd 0.12
96 TestFunctional/parallel/PersistentVolumeClaim 45.07
98 TestFunctional/parallel/SSHCmd 0.38
99 TestFunctional/parallel/CpCmd 1.33
100 TestFunctional/parallel/MySQL 31.59
101 TestFunctional/parallel/FileSync 0.23
102 TestFunctional/parallel/CertSync 1.25
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.68
111 TestFunctional/parallel/ServiceCmd/DeployApp 12.2
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
113 TestFunctional/parallel/MountCmd/any-port 11.65
114 TestFunctional/parallel/ProfileCmd/profile_list 0.32
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
116 TestFunctional/parallel/MountCmd/specific-port 1.82
117 TestFunctional/parallel/ServiceCmd/List 0.44
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
120 TestFunctional/parallel/ServiceCmd/Format 0.27
121 TestFunctional/parallel/ServiceCmd/URL 0.31
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.13
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.55
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
142 TestFunctional/parallel/ImageCommands/Setup 1.96
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.98
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.57
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.68
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 207.7
157 TestMultiControlPlane/serial/DeployApp 7.62
158 TestMultiControlPlane/serial/PingHostFromPods 1.21
159 TestMultiControlPlane/serial/AddWorkerNode 57.66
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
162 TestMultiControlPlane/serial/CopyFile 12.43
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.61
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
177 TestJSONOutput/start/Command 88.75
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.73
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.66
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 7.38
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.19
205 TestMainNoArgs 0.04
206 TestMinikubeProfile 85.33
209 TestMountStart/serial/StartWithMountFirst 27.94
210 TestMountStart/serial/VerifyMountFirst 0.36
211 TestMountStart/serial/StartWithMountSecond 28.76
212 TestMountStart/serial/VerifyMountSecond 0.37
213 TestMountStart/serial/DeleteFirst 0.87
214 TestMountStart/serial/VerifyMountPostDelete 0.37
215 TestMountStart/serial/Stop 1.28
216 TestMountStart/serial/RestartStopped 23.88
217 TestMountStart/serial/VerifyMountPostStop 0.38
220 TestMultiNode/serial/FreshStart2Nodes 109
221 TestMultiNode/serial/DeployApp2Nodes 6.28
222 TestMultiNode/serial/PingHostFrom2Pods 0.78
223 TestMultiNode/serial/AddNode 53.01
224 TestMultiNode/serial/MultiNodeLabels 0.06
225 TestMultiNode/serial/ProfileList 0.21
226 TestMultiNode/serial/CopyFile 6.99
227 TestMultiNode/serial/StopNode 2.29
228 TestMultiNode/serial/StartAfterStop 40.18
230 TestMultiNode/serial/DeleteNode 2.2
232 TestMultiNode/serial/RestartMultiNode 202.98
233 TestMultiNode/serial/ValidateNameConflict 44.92
240 TestScheduledStopUnix 113.47
244 TestRunningBinaryUpgrade 236.06
249 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
250 TestNoKubernetes/serial/StartWithK8s 92.63
258 TestNetworkPlugins/group/false 2.91
269 TestNoKubernetes/serial/StartWithStopK8s 44.89
270 TestNoKubernetes/serial/Start 44.73
272 TestPause/serial/Start 58.29
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
274 TestNoKubernetes/serial/ProfileList 1.65
275 TestNoKubernetes/serial/Stop 1.52
276 TestNoKubernetes/serial/StartNoArgs 43.57
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
279 TestStoppedBinaryUpgrade/Setup 2.61
280 TestStoppedBinaryUpgrade/Upgrade 106.52
281 TestNetworkPlugins/group/auto/Start 95.9
282 TestNetworkPlugins/group/kindnet/Start 89.98
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
284 TestNetworkPlugins/group/calico/Start 101.57
285 TestNetworkPlugins/group/auto/KubeletFlags 0.23
286 TestNetworkPlugins/group/auto/NetCatPod 10.24
287 TestNetworkPlugins/group/auto/DNS 0.16
288 TestNetworkPlugins/group/auto/Localhost 0.13
289 TestNetworkPlugins/group/auto/HairPin 0.14
290 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
291 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
292 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
293 TestNetworkPlugins/group/custom-flannel/Start 72.08
294 TestNetworkPlugins/group/kindnet/DNS 0.19
295 TestNetworkPlugins/group/kindnet/Localhost 0.14
296 TestNetworkPlugins/group/kindnet/HairPin 0.17
297 TestNetworkPlugins/group/enable-default-cni/Start 92.76
298 TestNetworkPlugins/group/calico/ControllerPod 6.01
299 TestNetworkPlugins/group/calico/KubeletFlags 0.22
300 TestNetworkPlugins/group/calico/NetCatPod 14.27
301 TestNetworkPlugins/group/calico/DNS 0.15
302 TestNetworkPlugins/group/calico/Localhost 0.14
303 TestNetworkPlugins/group/calico/HairPin 0.16
304 TestNetworkPlugins/group/flannel/Start 79.22
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
307 TestNetworkPlugins/group/custom-flannel/DNS 0.15
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
310 TestNetworkPlugins/group/bridge/Start 102.18
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.31
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
318 TestNetworkPlugins/group/flannel/ControllerPod 6.01
319 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
320 TestNetworkPlugins/group/flannel/NetCatPod 12.27
321 TestNetworkPlugins/group/flannel/DNS 0.17
322 TestNetworkPlugins/group/flannel/Localhost 0.2
323 TestNetworkPlugins/group/flannel/HairPin 0.13
325 TestStartStop/group/no-preload/serial/FirstStart 102.09
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
327 TestNetworkPlugins/group/bridge/NetCatPod 11.23
329 TestStartStop/group/embed-certs/serial/FirstStart 57.24
330 TestNetworkPlugins/group/bridge/DNS 0.16
331 TestNetworkPlugins/group/bridge/Localhost 0.14
332 TestNetworkPlugins/group/bridge/HairPin 0.14
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.24
335 TestStartStop/group/embed-certs/serial/DeployApp 13.32
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
338 TestStartStop/group/no-preload/serial/DeployApp 11.29
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
347 TestStartStop/group/embed-certs/serial/SecondStart 635.38
349 TestStartStop/group/no-preload/serial/SecondStart 568.37
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 584.6
352 TestStartStop/group/old-k8s-version/serial/Stop 1.35
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
364 TestStartStop/group/newest-cni/serial/FirstStart 48.89
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
367 TestStartStop/group/newest-cni/serial/Stop 10.49
368 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
369 TestStartStop/group/newest-cni/serial/SecondStart 35.48
370 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
373 TestStartStop/group/newest-cni/serial/Pause 4.21
TestDownloadOnly/v1.20.0/json-events (32.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-220014 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-220014 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (32.668274246s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (32.67s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-220014
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-220014: exit status 85 (54.928106ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |          |
	|         | -p download-only-220014        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:46.589924   11091 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:46.590179   11091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:46.590189   11091 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:46.590193   11091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:46.590355   11091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	W0913 18:20:46.590472   11091 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19636-3902/.minikube/config/config.json: open /home/jenkins/minikube-integration/19636-3902/.minikube/config/config.json: no such file or directory
	I0913 18:20:46.590996   11091 out.go:352] Setting JSON to true
	I0913 18:20:46.591859   11091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":190,"bootTime":1726251457,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:46.591947   11091 start.go:139] virtualization: kvm guest
	I0913 18:20:46.594319   11091 out.go:97] [download-only-220014] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 18:20:46.594429   11091 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:20:46.594465   11091 notify.go:220] Checking for updates...
	I0913 18:20:46.596052   11091 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:46.597540   11091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:46.598796   11091 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:20:46.599900   11091 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:20:46.601184   11091 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 18:20:46.603377   11091 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:20:46.603560   11091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:20:46.703243   11091 out.go:97] Using the kvm2 driver based on user configuration
	I0913 18:20:46.703277   11091 start.go:297] selected driver: kvm2
	I0913 18:20:46.703294   11091 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:20:46.703618   11091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:46.703748   11091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:20:46.718188   11091 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:20:46.718251   11091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:20:46.718844   11091 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0913 18:20:46.719010   11091 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:20:46.719038   11091 cni.go:84] Creating CNI manager for ""
	I0913 18:20:46.719095   11091 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:20:46.719106   11091 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:20:46.719179   11091 start.go:340] cluster config:
	{Name:download-only-220014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-220014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:20:46.719372   11091 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:46.721285   11091 out.go:97] Downloading VM boot image ...
	I0913 18:20:46.721326   11091 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:21:00.628365   11091 out.go:97] Starting "download-only-220014" primary control-plane node in "download-only-220014" cluster
	I0913 18:21:00.628386   11091 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 18:21:00.742950   11091 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 18:21:00.742980   11091 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:00.743140   11091 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 18:21:00.745113   11091 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 18:21:00.745148   11091 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0913 18:21:00.887517   11091 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-220014 host does not exist
	  To start a cluster, run: "minikube start -p download-only-220014"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-220014
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (24.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-283125 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-283125 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.181416008s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (24.18s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-283125
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-283125: exit status 85 (58.763913ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-220014        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| delete  | -p download-only-220014        | download-only-220014 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | -o=json --download-only        | download-only-283125 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | -p download-only-283125        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:21:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:21:19.562084   11381 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:21:19.562216   11381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:19.562224   11381 out.go:358] Setting ErrFile to fd 2...
	I0913 18:21:19.562228   11381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:19.562403   11381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:21:19.562926   11381 out.go:352] Setting JSON to true
	I0913 18:21:19.563681   11381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":223,"bootTime":1726251457,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:21:19.563742   11381 start.go:139] virtualization: kvm guest
	I0913 18:21:19.565557   11381 out.go:97] [download-only-283125] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:21:19.565666   11381 notify.go:220] Checking for updates...
	I0913 18:21:19.566911   11381 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:21:19.567986   11381 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:21:19.569036   11381 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:21:19.570071   11381 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:21:19.571196   11381 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 18:21:19.573029   11381 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:21:19.573193   11381 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:21:19.604578   11381 out.go:97] Using the kvm2 driver based on user configuration
	I0913 18:21:19.604603   11381 start.go:297] selected driver: kvm2
	I0913 18:21:19.604608   11381 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:21:19.604921   11381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:19.604997   11381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3902/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:21:19.619839   11381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:21:19.619885   11381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:21:19.620426   11381 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0913 18:21:19.620583   11381 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:21:19.620609   11381 cni.go:84] Creating CNI manager for ""
	I0913 18:21:19.620655   11381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 18:21:19.620676   11381 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:21:19.620732   11381 start.go:340] cluster config:
	{Name:download-only-283125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-283125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:19.620831   11381 iso.go:125] acquiring lock: {Name:mk6d09a9e7ffc35d34bf41cb51ad15df14a4d34d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:19.622569   11381 out.go:97] Starting "download-only-283125" primary control-plane node in "download-only-283125" cluster
	I0913 18:21:19.622593   11381 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:21:20.290928   11381 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 18:21:20.290969   11381 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:20.291166   11381 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 18:21:20.292988   11381 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 18:21:20.293028   11381 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0913 18:21:20.407459   11381 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19636-3902/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-283125 host does not exist
	  To start a cluster, run: "minikube start -p download-only-283125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-283125
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-840809 --alsologtostderr --binary-mirror http://127.0.0.1:46177 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-840809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-840809
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (112.23s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-568412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-568412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.219121383s)
helpers_test.go:175: Cleaning up "offline-crio-568412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-568412
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-568412: (1.011598971s)
--- PASS: TestOffline (112.23s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-979357
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-979357: exit status 85 (46.683054ms)

                                                
                                                
-- stdout --
	* Profile "addons-979357" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979357"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-979357
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-979357: exit status 85 (46.804029ms)

                                                
                                                
-- stdout --
	* Profile "addons-979357" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979357"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (141.31s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-979357 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-979357 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m21.305637435s)
--- PASS: TestAddons/Setup (141.31s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-979357 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-979357 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.03s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ppcdc" [f803e61f-3656-4c93-a016-3aa86dfb2383] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.039691275s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-979357
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-979357: (5.993984095s)
--- PASS: TestAddons/parallel/InspektorGadget (12.03s)

                                                
                                    
TestAddons/parallel/CSI (50.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.694572ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-979357 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-979357 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7892613e-de29-4e74-a386-c34e3faa1dbf] Pending
helpers_test.go:344: "task-pv-pod" [7892613e-de29-4e74-a386-c34e3faa1dbf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7892613e-de29-4e74-a386-c34e3faa1dbf] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003950202s
addons_test.go:528: (dbg) Run:  kubectl --context addons-979357 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979357 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979357 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-979357 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-979357 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-979357 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-979357 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [34b800b1-d2f8-4d81-badb-d5d003b1751c] Pending
helpers_test.go:344: "task-pv-pod-restore" [34b800b1-d2f8-4d81-badb-d5d003b1751c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [34b800b1-d2f8-4d81-badb-d5d003b1751c] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004833667s
addons_test.go:570: (dbg) Run:  kubectl --context addons-979357 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-979357 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-979357 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.721098332s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable volumesnapshots --alsologtostderr -v=1: (1.035239215s)
--- PASS: TestAddons/parallel/CSI (50.25s)

                                                
                                    
TestAddons/parallel/Headlamp (20.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-979357 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-dmdl5" [fd091d4e-0e2f-44dc-a87f-33e4890bedd1] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-dmdl5" [fd091d4e-0e2f-44dc-a87f-33e4890bedd1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-dmdl5" [fd091d4e-0e2f-44dc-a87f-33e4890bedd1] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004668119s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable headlamp --alsologtostderr -v=1: (5.749560892s)
--- PASS: TestAddons/parallel/Headlamp (20.69s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-vn46r" [f71152bf-b7e0-4c32-82f4-1bbc6829fc77] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004107355s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-979357
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
TestAddons/parallel/LocalPath (55.45s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-979357 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-979357 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [816d967e-d591-4e13-aecb-0cf44aa24faf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [816d967e-d591-4e13-aecb-0cf44aa24faf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [816d967e-d591-4e13-aecb-0cf44aa24faf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004913632s
addons_test.go:938: (dbg) Run:  kubectl --context addons-979357 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 ssh "cat /opt/local-path-provisioner/pvc-2e98d28b-4232-4373-82bf-032b9972820e_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-979357 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-979357 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.679614219s)
--- PASS: TestAddons/parallel/LocalPath (55.45s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-66thw" [ad466b8e-d669-4281-be12-36ab1bbbee83] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003358997s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-979357
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
TestAddons/parallel/Yakd (10.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tmqz9" [be30a802-e73a-4fc1-908e-2c7784677657] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004157208s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-979357 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-979357 addons disable yakd --alsologtostderr -v=1: (5.740918794s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-979357
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-979357: (7.288121289s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-979357
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-979357
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-979357
--- PASS: TestAddons/StoppedEnableDisable (7.54s)
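
The point of this block is that addon toggles keep working against a stopped profile; a minimal by-hand sketch (plain minikube binary assumed):

    minikube stop -p addons-979357
    minikube addons enable dashboard -p addons-979357     # succeeds even though the VM is down
    minikube addons disable dashboard -p addons-979357
    minikube addons disable gvisor -p addons-979357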

                                                
                                    
TestCertOptions (48.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-718151 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-718151 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.58376594s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-718151 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-718151 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-718151 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-718151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-718151
--- PASS: TestCertOptions (48.81s)
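
A hedged sketch of the same verification by hand: confirm the extra SANs landed in the generated apiserver certificate and that the kubeconfig uses the non-default port. The grep patterns are illustrative; the test asserts on the output in Go.

    minikube start -p cert-options-718151 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    minikube -p cert-options-718151 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-718151 config view | grep 8555    # API server URL should carry the custom port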

                                                
                                    
TestCertExpiration (301.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-235626 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0913 19:39:06.601353   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-235626 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.375775001s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-235626 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-235626 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.904219657s)
helpers_test.go:175: Cleaning up "cert-expiration-235626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-235626
--- PASS: TestCertExpiration (301.27s)
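
The two starts are the whole scenario: bring the profile up with a 3-minute certificate lifetime, let it lapse, then start again with a long lifetime so minikube has to regenerate the certificates. A sketch of doing it manually:

    minikube start -p cert-expiration-235626 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180    # wait out the short certificate lifetime
    minikube start -p cert-expiration-235626 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-235626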

                                                
                                    
TestForceSystemdFlag (65.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-642942 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-642942 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.662292063s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-642942 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-642942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-642942
--- PASS: TestForceSystemdFlag (65.86s)
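
The ssh step reads the CRI-O drop-in that --force-systemd is expected to rewrite. A manual approximation is below; the grep for the cgroup setting is an assumption about what to look for, not the test's literal assertion.

    minikube start -p force-systemd-flag-642942 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-flag-642942 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup
    minikube delete -p force-systemd-flag-642942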

                                                
                                    
TestForceSystemdEnv (46.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-756212 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-756212 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.445894249s)
helpers_test.go:175: Cleaning up "force-systemd-env-756212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-756212
--- PASS: TestForceSystemdEnv (46.43s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.45s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.45s)

                                                
                                    
TestErrorSpam/setup (40.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-239511 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-239511 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-239511 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-239511 --driver=kvm2  --container-runtime=crio: (40.508057991s)
--- PASS: TestErrorSpam/setup (40.51s)

                                                
                                    
TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (4.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop: (2.326268929s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop: (1.106493384s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-239511 --log_dir /tmp/nospam-239511 stop: (1.499229123s)
--- PASS: TestErrorSpam/stop (4.93s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19636-3902/.minikube/files/etc/test/nested/copy/11079/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0913 18:39:06.601556   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.608314   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.619643   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.641024   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.682363   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.763751   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:06.925278   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:07.246944   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:07.888994   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:09.170588   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:11.733619   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:16.855466   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:27.096960   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-204039 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m19.595697096s)
--- PASS: TestFunctional/serial/StartWithProxy (79.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --alsologtostderr -v=8
E0913 18:39:47.578334   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-204039 --alsologtostderr -v=8: (32.955413509s)
functional_test.go:663: soft start took 32.956121359s for "functional-204039" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-204039 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:3.1: (1.136006973s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:3.3: (1.304118901s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 cache add registry.k8s.io/pause:latest: (1.199843362s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-204039 /tmp/TestFunctionalserialCacheCmdcacheadd_local3699728815/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache add minikube-local-cache-test:functional-204039
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 cache add minikube-local-cache-test:functional-204039: (1.931529203s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache delete minikube-local-cache-test:functional-204039
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-204039
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)
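
The add_local flow boils down to building a throwaway image on the host and loading it into minikube's cache; a rough sketch, with ./build-context standing in for the temporary Docker build directory the test generates:

    docker build -t minikube-local-cache-test:functional-204039 ./build-context
    minikube -p functional-204039 cache add minikube-local-cache-test:functional-204039
    minikube -p functional-204039 cache delete minikube-local-cache-test:functional-204039
    docker rmi minikube-local-cache-test:functional-204039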

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.253237ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 cache reload: (1.017425973s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
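
The recipe worth keeping from this block: delete an image inside the node, confirm crictl no longer finds it, then restore everything from the host-side cache. Condensed:

    minikube -p functional-204039 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-204039 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # non-zero exit: image gone
    minikube -p functional-204039 cache reload
    minikube -p functional-204039 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # image is back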

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 kubectl -- --context functional-204039 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-204039 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (28.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0913 18:40:28.540573   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-204039 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.984549503s)
functional_test.go:761: restart took 28.984664749s for "functional-204039" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (28.98s)
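
Restarting with --extra-config is how an apiserver flag gets injected into an existing profile; a minimal sketch against the same profile:

    minikube start -p functional-204039 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-204039 get po -l tier=control-plane -n kube-system -o=json    # control plane comes back Ready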

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-204039 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 logs: (1.409749132s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 logs --file /tmp/TestFunctionalserialLogsFileCmd1297723297/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 logs --file /tmp/TestFunctionalserialLogsFileCmd1297723297/001/logs.txt: (1.420961107s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (4.87s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-204039 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-204039
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-204039: exit status 115 (262.675584ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.239:32767 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-204039 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-204039 delete -f testdata/invalidsvc.yaml: (1.42836723s)
--- PASS: TestFunctional/serial/InvalidService (4.87s)
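
The behaviour exercised here: `minikube service` exits with status 115 (SVC_UNREACHABLE) when the Service exists but no running pod backs it. Roughly reproducible with the same manifest:

    kubectl --context functional-204039 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-204039    # exit 115: no running pod for service invalid-svc
    kubectl --context functional-204039 delete -f testdata/invalidsvc.yaml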

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 config get cpus: exit status 14 (66.125515ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 config get cpus: exit status 14 (47.455639ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (23.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-204039 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-204039 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20695: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-204039 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.446929ms)

                                                
                                                
-- stdout --
	* [functional-204039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:40:58.901695   20522 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:40:58.901820   20522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:40:58.901828   20522 out.go:358] Setting ErrFile to fd 2...
	I0913 18:40:58.901833   20522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:40:58.902016   20522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:40:58.902543   20522 out.go:352] Setting JSON to false
	I0913 18:40:58.903806   20522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1402,"bootTime":1726251457,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:40:58.903932   20522 start.go:139] virtualization: kvm guest
	I0913 18:40:58.906326   20522 out.go:177] * [functional-204039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:40:58.907654   20522 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:40:58.907679   20522 notify.go:220] Checking for updates...
	I0913 18:40:58.910985   20522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:40:58.912560   20522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:40:58.914264   20522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:40:58.915514   20522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:40:58.918699   20522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:40:58.920574   20522 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:40:58.921190   20522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:40:58.921327   20522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:40:58.941386   20522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0913 18:40:58.941904   20522 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:40:58.942448   20522 main.go:141] libmachine: Using API Version  1
	I0913 18:40:58.942473   20522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:40:58.942844   20522 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:40:58.943074   20522 main.go:141] libmachine: (functional-204039) Calling .DriverName
	I0913 18:40:58.943365   20522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:40:58.943711   20522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:40:58.943747   20522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:40:58.964953   20522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0913 18:40:58.965650   20522 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:40:58.966312   20522 main.go:141] libmachine: Using API Version  1
	I0913 18:40:58.966336   20522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:40:58.966717   20522 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:40:58.966931   20522 main.go:141] libmachine: (functional-204039) Calling .DriverName
	I0913 18:40:59.012336   20522 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:40:59.014153   20522 start.go:297] selected driver: kvm2
	I0913 18:40:59.014172   20522 start.go:901] validating driver "kvm2" against &{Name:functional-204039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-204039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:40:59.014311   20522 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:40:59.016410   20522 out.go:201] 
	W0913 18:40:59.017350   20522 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 18:40:59.018673   20522 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
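
The dry-run pair demonstrates the memory validation path: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before any VM work starts, while the second dry run without the low memory request succeeds. Sketch:

    minikube start -p functional-204039 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio    # exit 23
    minikube start -p functional-204039 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio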

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204039 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-204039 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.475208ms)

                                                
                                                
-- stdout --
	* [functional-204039] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 18:40:58.766826   20478 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:40:58.767012   20478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:40:58.767038   20478 out.go:358] Setting ErrFile to fd 2...
	I0913 18:40:58.767055   20478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:40:58.767512   20478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 18:40:58.768225   20478 out.go:352] Setting JSON to false
	I0913 18:40:58.769558   20478 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1402,"bootTime":1726251457,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:40:58.769674   20478 start.go:139] virtualization: kvm guest
	I0913 18:40:58.771848   20478 out.go:177] * [functional-204039] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0913 18:40:58.772982   20478 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:40:58.772979   20478 notify.go:220] Checking for updates...
	I0913 18:40:58.775620   20478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:40:58.776833   20478 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 18:40:58.778039   20478 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 18:40:58.779460   20478 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:40:58.780885   20478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:40:58.782440   20478 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 18:40:58.783215   20478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:40:58.783282   20478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:40:58.798996   20478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I0913 18:40:58.799456   20478 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:40:58.799995   20478 main.go:141] libmachine: Using API Version  1
	I0913 18:40:58.800016   20478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:40:58.800439   20478 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:40:58.800603   20478 main.go:141] libmachine: (functional-204039) Calling .DriverName
	I0913 18:40:58.800849   20478 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:40:58.801124   20478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 18:40:58.801152   20478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:40:58.816089   20478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0913 18:40:58.816532   20478 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:40:58.817002   20478 main.go:141] libmachine: Using API Version  1
	I0913 18:40:58.817030   20478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:40:58.817383   20478 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:40:58.817557   20478 main.go:141] libmachine: (functional-204039) Calling .DriverName
	I0913 18:40:58.851005   20478 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0913 18:40:58.852072   20478 start.go:297] selected driver: kvm2
	I0913 18:40:58.852090   20478 start.go:901] validating driver "kvm2" against &{Name:functional-204039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-204039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:40:58.852212   20478 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:40:58.854632   20478 out.go:201] 
	W0913 18:40:58.855606   20478 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 18:40:58.856904   20478 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.89s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-204039 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-204039 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-999c4" [2368672b-0746-459e-a8bf-79387de86c7e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-999c4" [2368672b-0746-459e-a8bf-79387de86c7e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.379307845s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.239:32638
functional_test.go:1675: http://192.168.39.239:32638: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-999c4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.239:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.239:32638
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.89s)
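
A condensed sketch of the NodePort round trip; the curl call is an assumption, since the test fetches the URL from Go rather than the shell:

    kubectl --context functional-204039 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-204039 expose deployment hello-node-connect --type=NodePort --port=8080
    url=$(minikube -p functional-204039 service hello-node-connect --url)
    curl -s "$url"    # echoserver answers with the pod hostname and request headers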

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (45.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [537c5baa-cfb0-46c7-8409-19fcddf10cd3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003180228s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-204039 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204039 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d43a1e30-9cc9-4d3a-a5af-080880bc9340] Pending
helpers_test.go:344: "sp-pod" [d43a1e30-9cc9-4d3a-a5af-080880bc9340] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d43a1e30-9cc9-4d3a-a5af-080880bc9340] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004696978s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-204039 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-204039 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-204039 delete -f testdata/storage-provisioner/pod.yaml: (5.27525289s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1872d812-2a87-4040-b37b-0bcea5543ab1] Pending
helpers_test.go:344: "sp-pod" [1872d812-2a87-4040-b37b-0bcea5543ab1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1872d812-2a87-4040-b37b-0bcea5543ab1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004120461s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-204039 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.07s)
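The persistence check above amounts to writing a marker file through one pod and reading it back through a replacement pod bound to the same claim; a condensed sketch using the manifests named in the log (the test also waits for each pod to become Ready between steps):

  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-204039 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-204039 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-204039 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-204039 exec sp-pod -- ls /tmp/mount   # expect to see foo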

TestFunctional/parallel/SSHCmd (0.38s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh -n functional-204039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cp functional-204039:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd951756175/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh -n functional-204039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh -n functional-204039 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
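The cp round trip above can be checked end to end by diffing the file that comes back out of the node; a sketch, where /tmp/cp-test-copy.txt is a hypothetical scratch path:

  out/minikube-linux-amd64 -p functional-204039 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-204039 cp functional-204039:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
  diff testdata/cp-test.txt /tmp/cp-test-copy.txt   # no output means the copy is byte-identical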

TestFunctional/parallel/MySQL (31.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-204039 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-2n4px" [3f8ce8cf-7c61-40f8-a9b6-a1796e21e0bd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-2n4px" [3f8ce8cf-7c61-40f8-a9b6-a1796e21e0bd] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.003928667s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-204039 exec mysql-6cdb49bbb-2n4px -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-204039 exec mysql-6cdb49bbb-2n4px -- mysql -ppassword -e "show databases;": exit status 1 (127.470056ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-204039 exec mysql-6cdb49bbb-2n4px -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.59s)
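The first exec above failed with ERROR 2002 because mysqld inside the pod was still initializing and not yet listening on its socket; the immediate retry succeeded. A simple way to wait it out by hand, using the pod name from this run:

  until kubectl --context functional-204039 exec mysql-6cdb49bbb-2n4px -- mysql -ppassword -e "show databases;"; do
    sleep 2   # retry until mysqld starts accepting connections
  done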

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11079/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/test/nested/copy/11079/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/ssl/certs/11079.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /usr/share/ca-certificates/11079.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/110792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/ssl/certs/110792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/110792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /usr/share/ca-certificates/110792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)
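The hash-named path checked above (/etc/ssl/certs/51391683.0) is the c_rehash-style symlink for the injected certificate; assuming openssl is available in the guest, the pairing can be confirmed by recomputing the subject hash:

  out/minikube-linux-amd64 -p functional-204039 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/11079.pem"   # expected to print 51391683
  out/minikube-linux-amd64 -p functional-204039 ssh "sudo cat /etc/ssl/certs/51391683.0"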

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-204039 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active docker": exit status 1 (251.36711ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active containerd": exit status 1 (213.093873ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
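The non-zero exits above are the expected outcome: this profile runs cri-o, so docker and containerd are disabled, and systemctl is-active reports an inactive unit with a non-zero status (3, surfaced here as the ssh exit status). A quick sanity check, assuming the cri-o unit in the guest is named crio:

  out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active crio"     # expected: active, exit 0
  out/minikube-linux-amd64 -p functional-204039 ssh "sudo systemctl is-active docker"   # expected: inactive, non-zero exit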

TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-204039 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-204039 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jqgg8" [79ed54b8-3380-4a99-b48a-d14a186b7b79] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jqgg8" [79ed54b8-3380-4a99-b48a-d14a186b7b79] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.005144397s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/MountCmd/any-port (11.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdany-port1065752933/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726252857711942035" to /tmp/TestFunctionalparallelMountCmdany-port1065752933/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726252857711942035" to /tmp/TestFunctionalparallelMountCmdany-port1065752933/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726252857711942035" to /tmp/TestFunctionalparallelMountCmdany-port1065752933/001/test-1726252857711942035
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.497295ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 18:40 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 18:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 18:40 test-1726252857711942035
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh cat /mount-9p/test-1726252857711942035
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-204039 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b4395dba-955f-4551-88bb-c868279de4de] Pending
helpers_test.go:344: "busybox-mount" [b4395dba-955f-4551-88bb-c868279de4de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b4395dba-955f-4551-88bb-c868279de4de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b4395dba-955f-4551-88bb-c868279de4de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.004036249s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-204039 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdany-port1065752933/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.65s)
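The same 9p mount flow can be driven by hand; a sketch, with /tmp/hostdir standing in for any host directory:

  out/minikube-linux-amd64 mount -p functional-204039 /tmp/hostdir:/mount-9p &   # keep the mount helper running in the background
  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-204039 ssh "ls -la /mount-9p"
  out/minikube-linux-amd64 -p functional-204039 ssh "sudo umount -f /mount-9p"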

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "272.336974ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.839916ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "233.46199ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.373494ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdspecific-port2990456459/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.467373ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdspecific-port2990456459/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "sudo umount -f /mount-9p": exit status 1 (185.321703ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-204039 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdspecific-port2990456459/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)
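The umount failure captured above appears benign here (the subtest still passed): by the time the forced umount runs, stopping the mount helper has already torn the mount down, so umount reports it as not mounted. The only functional difference from the any-port case is pinning the 9p server to a fixed port; a sketch with the port from this run and a hypothetical host directory:

  out/minikube-linux-amd64 mount -p functional-204039 /tmp/hostdir:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T /mount-9p | grep 9p"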

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service list -o json
functional_test.go:1494: Took "422.591416ms" to run "out/minikube-linux-amd64 -p functional-204039 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.239:32407
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.239:32407
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T" /mount1: exit status 1 (231.301966ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-204039 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204039 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3051608448/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204039 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-204039
localhost/kicbase/echo-server:functional-204039
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204039 image ls --format short --alsologtostderr:
I0913 18:41:38.665787   22441 out.go:345] Setting OutFile to fd 1 ...
I0913 18:41:38.665879   22441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.665888   22441 out.go:358] Setting ErrFile to fd 2...
I0913 18:41:38.665892   22441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.666074   22441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
I0913 18:41:38.666730   22441 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.666848   22441 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.667249   22441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.667292   22441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.681077   22441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
I0913 18:41:38.681570   22441 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.682107   22441 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.682134   22441 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.682427   22441 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.682595   22441 main.go:141] libmachine: (functional-204039) Calling .GetState
I0913 18:41:38.684468   22441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.684563   22441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.699969   22441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
I0913 18:41:38.700334   22441 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.700877   22441 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.700919   22441 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.701289   22441 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.701465   22441 main.go:141] libmachine: (functional-204039) Calling .DriverName
I0913 18:41:38.701628   22441 ssh_runner.go:195] Run: systemctl --version
I0913 18:41:38.701650   22441 main.go:141] libmachine: (functional-204039) Calling .GetSSHHostname
I0913 18:41:38.704540   22441 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.704891   22441 main.go:141] libmachine: (functional-204039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:33:a9", ip: ""} in network mk-functional-204039: {Iface:virbr1 ExpiryTime:2024-09-13 19:38:34 +0000 UTC Type:0 Mac:52:54:00:a5:33:a9 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-204039 Clientid:01:52:54:00:a5:33:a9}
I0913 18:41:38.704917   22441 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined IP address 192.168.39.239 and MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.705147   22441 main.go:141] libmachine: (functional-204039) Calling .GetSSHPort
I0913 18:41:38.705284   22441 main.go:141] libmachine: (functional-204039) Calling .GetSSHKeyPath
I0913 18:41:38.705411   22441 main.go:141] libmachine: (functional-204039) Calling .GetSSHUsername
I0913 18:41:38.705522   22441 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/functional-204039/id_rsa Username:docker}
I0913 18:41:38.791624   22441 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 18:41:38.849325   22441 main.go:141] libmachine: Making call to close driver server
I0913 18:41:38.849340   22441 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:38.849601   22441 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:38.849606   22441 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:38.849632   22441 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:38.849646   22441 main.go:141] libmachine: Making call to close driver server
I0913 18:41:38.849654   22441 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:38.849873   22441 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:38.849924   22441 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:38.849894   22441 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
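The ImageCommands/ImageList subtests all render the same image inventory, only in different output formats; as the stderr trace shows, each one resolves to sudo crictl images --output json on the node. The underlying commands, minus the --alsologtostderr used by the test harness:

  out/minikube-linux-amd64 -p functional-204039 image ls --format short
  out/minikube-linux-amd64 -p functional-204039 image ls --format table
  out/minikube-linux-amd64 -p functional-204039 image ls --format json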

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204039 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/kicbase/echo-server           | functional-204039  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-204039  | 342da6dc6f780 | 3.33kB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204039 image ls --format table --alsologtostderr:
I0913 18:41:38.902646   22493 out.go:345] Setting OutFile to fd 1 ...
I0913 18:41:38.902964   22493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.902997   22493 out.go:358] Setting ErrFile to fd 2...
I0913 18:41:38.903007   22493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.903436   22493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
I0913 18:41:38.904722   22493 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.904832   22493 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.905231   22493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.905266   22493 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.918860   22493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35711
I0913 18:41:38.919291   22493 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.919803   22493 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.919820   22493 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.920118   22493 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.920273   22493 main.go:141] libmachine: (functional-204039) Calling .GetState
I0913 18:41:38.922244   22493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.922284   22493 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.936388   22493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
I0913 18:41:38.936788   22493 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.937258   22493 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.937279   22493 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.937615   22493 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.937834   22493 main.go:141] libmachine: (functional-204039) Calling .DriverName
I0913 18:41:38.938025   22493 ssh_runner.go:195] Run: systemctl --version
I0913 18:41:38.938054   22493 main.go:141] libmachine: (functional-204039) Calling .GetSSHHostname
I0913 18:41:38.940933   22493 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.941337   22493 main.go:141] libmachine: (functional-204039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:33:a9", ip: ""} in network mk-functional-204039: {Iface:virbr1 ExpiryTime:2024-09-13 19:38:34 +0000 UTC Type:0 Mac:52:54:00:a5:33:a9 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-204039 Clientid:01:52:54:00:a5:33:a9}
I0913 18:41:38.941364   22493 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined IP address 192.168.39.239 and MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.941451   22493 main.go:141] libmachine: (functional-204039) Calling .GetSSHPort
I0913 18:41:38.941690   22493 main.go:141] libmachine: (functional-204039) Calling .GetSSHKeyPath
I0913 18:41:38.941844   22493 main.go:141] libmachine: (functional-204039) Calling .GetSSHUsername
I0913 18:41:38.941986   22493 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/functional-204039/id_rsa Username:docker}
I0913 18:41:39.018354   22493 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 18:41:39.089976   22493 main.go:141] libmachine: Making call to close driver server
I0913 18:41:39.089996   22493 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:39.090277   22493 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:39.090295   22493 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:39.090314   22493 main.go:141] libmachine: Making call to close driver server
I0913 18:41:39.090321   22493 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:39.090533   22493 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:39.090550   22493 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:39.090562   22493 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204039 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigest
s":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064
c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/
kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"342da6dc6f780e0ef59cbaac1415c98c037c5e54a2d0f8a258a559ac60735229","repoDigests":["localhost/minikube-local-cache-test@sha256:6bea91265092bea1cf0cea697fed9aa0a3b72d7588d24da810f0af3ee1408348"],"repoTags":["localhost/minikube-local-cache-test:functional-204039"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","r
epoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"9273384
9"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-204039"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"}
,{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204039 image ls --format json --alsologtostderr:
I0913 18:41:38.901648   22487 out.go:345] Setting OutFile to fd 1 ...
I0913 18:41:38.901790   22487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.901802   22487 out.go:358] Setting ErrFile to fd 2...
I0913 18:41:38.901809   22487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.902137   22487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
I0913 18:41:38.903001   22487 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.903169   22487 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.903795   22487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.903860   22487 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.918691   22487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
I0913 18:41:38.919241   22487 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.919795   22487 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.919816   22487 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.920150   22487 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.920337   22487 main.go:141] libmachine: (functional-204039) Calling .GetState
I0913 18:41:38.922388   22487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.922430   22487 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.936405   22487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
I0913 18:41:38.936871   22487 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.937386   22487 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.937403   22487 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.937719   22487 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.937901   22487 main.go:141] libmachine: (functional-204039) Calling .DriverName
I0913 18:41:38.938076   22487 ssh_runner.go:195] Run: systemctl --version
I0913 18:41:38.938132   22487 main.go:141] libmachine: (functional-204039) Calling .GetSSHHostname
I0913 18:41:38.941227   22487 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.941700   22487 main.go:141] libmachine: (functional-204039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:33:a9", ip: ""} in network mk-functional-204039: {Iface:virbr1 ExpiryTime:2024-09-13 19:38:34 +0000 UTC Type:0 Mac:52:54:00:a5:33:a9 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-204039 Clientid:01:52:54:00:a5:33:a9}
I0913 18:41:38.941723   22487 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined IP address 192.168.39.239 and MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.941885   22487 main.go:141] libmachine: (functional-204039) Calling .GetSSHPort
I0913 18:41:38.942040   22487 main.go:141] libmachine: (functional-204039) Calling .GetSSHKeyPath
I0913 18:41:38.942162   22487 main.go:141] libmachine: (functional-204039) Calling .GetSSHUsername
I0913 18:41:38.942272   22487 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/functional-204039/id_rsa Username:docker}
I0913 18:41:39.020817   22487 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 18:41:39.064734   22487 main.go:141] libmachine: Making call to close driver server
I0913 18:41:39.064749   22487 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:39.065000   22487 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:39.065019   22487 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:39.065031   22487 main.go:141] libmachine: Making call to close driver server
I0913 18:41:39.065035   22487 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:39.065038   22487 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:39.065303   22487 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:39.065328   22487 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:39.065338   22487 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204039 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-204039
size: "4943877"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 342da6dc6f780e0ef59cbaac1415c98c037c5e54a2d0f8a258a559ac60735229
repoDigests:
- localhost/minikube-local-cache-test@sha256:6bea91265092bea1cf0cea697fed9aa0a3b72d7588d24da810f0af3ee1408348
repoTags:
- localhost/minikube-local-cache-test:functional-204039
size: "3330"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204039 image ls --format yaml --alsologtostderr:
I0913 18:41:38.665036   22440 out.go:345] Setting OutFile to fd 1 ...
I0913 18:41:38.665210   22440 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.665222   22440 out.go:358] Setting ErrFile to fd 2...
I0913 18:41:38.665227   22440 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:38.665490   22440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
I0913 18:41:38.666144   22440 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.666260   22440 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:38.666611   22440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.666645   22440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.681010   22440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
I0913 18:41:38.681544   22440 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.682072   22440 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.682112   22440 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.682482   22440 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.682666   22440 main.go:141] libmachine: (functional-204039) Calling .GetState
I0913 18:41:38.684546   22440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:38.684582   22440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:38.699447   22440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
I0913 18:41:38.699845   22440 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:38.700372   22440 main.go:141] libmachine: Using API Version  1
I0913 18:41:38.700395   22440 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:38.700715   22440 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:38.700911   22440 main.go:141] libmachine: (functional-204039) Calling .DriverName
I0913 18:41:38.701064   22440 ssh_runner.go:195] Run: systemctl --version
I0913 18:41:38.701092   22440 main.go:141] libmachine: (functional-204039) Calling .GetSSHHostname
I0913 18:41:38.704411   22440 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.704827   22440 main.go:141] libmachine: (functional-204039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:33:a9", ip: ""} in network mk-functional-204039: {Iface:virbr1 ExpiryTime:2024-09-13 19:38:34 +0000 UTC Type:0 Mac:52:54:00:a5:33:a9 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-204039 Clientid:01:52:54:00:a5:33:a9}
I0913 18:41:38.704861   22440 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined IP address 192.168.39.239 and MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:38.704990   22440 main.go:141] libmachine: (functional-204039) Calling .GetSSHPort
I0913 18:41:38.705175   22440 main.go:141] libmachine: (functional-204039) Calling .GetSSHKeyPath
I0913 18:41:38.705363   22440 main.go:141] libmachine: (functional-204039) Calling .GetSSHUsername
I0913 18:41:38.705508   22440 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/functional-204039/id_rsa Username:docker}
I0913 18:41:38.793696   22440 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 18:41:38.852933   22440 main.go:141] libmachine: Making call to close driver server
I0913 18:41:38.852948   22440 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:38.853208   22440 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:38.853220   22440 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:38.853225   22440 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:38.853266   22440 main.go:141] libmachine: Making call to close driver server
I0913 18:41:38.853278   22440 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:38.853521   22440 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:38.853538   22440 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:38.853543   22440 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
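Both listings above are produced from the same "sudo crictl images --output json" call inside the VM (visible in each stderr trace); image ls only re-renders that data in the requested format. A minimal sketch of reproducing the two listings against this profile, using the exact commands logged above:

out/minikube-linux-amd64 -p functional-204039 image ls --format json --alsologtostderr
out/minikube-linux-amd64 -p functional-204039 image ls --format yaml --alsologtostderr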

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204039 ssh pgrep buildkitd: exit status 1 (181.756549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image build -t localhost/my-image:functional-204039 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 image build -t localhost/my-image:functional-204039 testdata/build --alsologtostderr: (3.719294977s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204039 image build -t localhost/my-image:functional-204039 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 22343c185e6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-204039
--> 3bf29259768
Successfully tagged localhost/my-image:functional-204039
3bf292597680a0ddebb962f987f6a45f4a42b6644e975a59a251d6c070db5158
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204039 image build -t localhost/my-image:functional-204039 testdata/build --alsologtostderr:
I0913 18:41:39.293934   22563 out.go:345] Setting OutFile to fd 1 ...
I0913 18:41:39.294072   22563 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:39.294082   22563 out.go:358] Setting ErrFile to fd 2...
I0913 18:41:39.294087   22563 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:41:39.294282   22563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
I0913 18:41:39.294888   22563 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:39.295409   22563 config.go:182] Loaded profile config "functional-204039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 18:41:39.295789   22563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:39.295833   22563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:39.310740   22563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
I0913 18:41:39.311222   22563 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:39.311793   22563 main.go:141] libmachine: Using API Version  1
I0913 18:41:39.311814   22563 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:39.312176   22563 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:39.312406   22563 main.go:141] libmachine: (functional-204039) Calling .GetState
I0913 18:41:39.314181   22563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 18:41:39.314225   22563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:41:39.328924   22563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
I0913 18:41:39.329366   22563 main.go:141] libmachine: () Calling .GetVersion
I0913 18:41:39.329820   22563 main.go:141] libmachine: Using API Version  1
I0913 18:41:39.329841   22563 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:41:39.330216   22563 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:41:39.330393   22563 main.go:141] libmachine: (functional-204039) Calling .DriverName
I0913 18:41:39.330587   22563 ssh_runner.go:195] Run: systemctl --version
I0913 18:41:39.330611   22563 main.go:141] libmachine: (functional-204039) Calling .GetSSHHostname
I0913 18:41:39.333164   22563 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:39.333485   22563 main.go:141] libmachine: (functional-204039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:33:a9", ip: ""} in network mk-functional-204039: {Iface:virbr1 ExpiryTime:2024-09-13 19:38:34 +0000 UTC Type:0 Mac:52:54:00:a5:33:a9 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-204039 Clientid:01:52:54:00:a5:33:a9}
I0913 18:41:39.333510   22563 main.go:141] libmachine: (functional-204039) DBG | domain functional-204039 has defined IP address 192.168.39.239 and MAC address 52:54:00:a5:33:a9 in network mk-functional-204039
I0913 18:41:39.333644   22563 main.go:141] libmachine: (functional-204039) Calling .GetSSHPort
I0913 18:41:39.333790   22563 main.go:141] libmachine: (functional-204039) Calling .GetSSHKeyPath
I0913 18:41:39.333924   22563 main.go:141] libmachine: (functional-204039) Calling .GetSSHUsername
I0913 18:41:39.334017   22563 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/functional-204039/id_rsa Username:docker}
I0913 18:41:39.413076   22563 build_images.go:161] Building image from path: /tmp/build.1441517757.tar
I0913 18:41:39.413147   22563 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 18:41:39.425953   22563 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1441517757.tar
I0913 18:41:39.430857   22563 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1441517757.tar: stat -c "%s %y" /var/lib/minikube/build/build.1441517757.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1441517757.tar': No such file or directory
I0913 18:41:39.430891   22563 ssh_runner.go:362] scp /tmp/build.1441517757.tar --> /var/lib/minikube/build/build.1441517757.tar (3072 bytes)
I0913 18:41:39.456408   22563 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1441517757
I0913 18:41:39.466235   22563 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1441517757 -xf /var/lib/minikube/build/build.1441517757.tar
I0913 18:41:39.475718   22563 crio.go:315] Building image: /var/lib/minikube/build/build.1441517757
I0913 18:41:39.475775   22563 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-204039 /var/lib/minikube/build/build.1441517757 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0913 18:41:42.942890   22563 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-204039 /var/lib/minikube/build/build.1441517757 --cgroup-manager=cgroupfs: (3.467088922s)
I0913 18:41:42.942981   22563 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1441517757
I0913 18:41:42.955283   22563 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1441517757.tar
I0913 18:41:42.967544   22563 build_images.go:217] Built localhost/my-image:functional-204039 from /tmp/build.1441517757.tar
I0913 18:41:42.967579   22563 build_images.go:133] succeeded building to: functional-204039
I0913 18:41:42.967583   22563 build_images.go:134] failed building to: 
I0913 18:41:42.967607   22563 main.go:141] libmachine: Making call to close driver server
I0913 18:41:42.967618   22563 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:42.967895   22563 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:42.967947   22563 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:42.967970   22563 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:41:42.967992   22563 main.go:141] libmachine: Making call to close driver server
I0913 18:41:42.968003   22563 main.go:141] libmachine: (functional-204039) Calling .Close
I0913 18:41:42.968221   22563 main.go:141] libmachine: (functional-204039) DBG | Closing plugin on server side
I0913 18:41:42.968230   22563 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:41:42.968243   22563 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)
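The build log above shows three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), so testdata/build presumably contains a Dockerfile along those lines. A hedged sketch of reproducing the build by hand; the Dockerfile contents, the /tmp/build path and the content.txt payload are reconstructed from the logged steps, not copied from the repository:

# Recreate a build context shaped like testdata/build (reconstructed from the logged steps; assumed layout)
mkdir -p /tmp/build
echo "test content" > /tmp/build/content.txt   # placeholder; the real file's contents are not in the log
cat > /tmp/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the VM; with the crio runtime the build is delegated to podman, as the stderr above shows
out/minikube-linux-amd64 -p functional-204039 image build -t localhost/my-image:functional-204039 /tmp/build --alsologtostderr
out/minikube-linux-amd64 -p functional-204039 image ls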

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/13 18:41:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.938782581s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-204039
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image load --daemon kicbase/echo-server:functional-204039 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 image load --daemon kicbase/echo-server:functional-204039 --alsologtostderr: (1.116123429s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image load --daemon kicbase/echo-server:functional-204039 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-204039
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image load --daemon kicbase/echo-server:functional-204039 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.98s)
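The load-daemon variants above share one pattern: tag an image in the host's Docker daemon under the profile name, push it into the cluster with image load --daemon, and confirm it with image ls. A minimal sketch using the commands logged above:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-204039
out/minikube-linux-amd64 -p functional-204039 image load --daemon kicbase/echo-server:functional-204039 --alsologtostderr
out/minikube-linux-amd64 -p functional-204039 image ls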

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image save kicbase/echo-server:functional-204039 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 image save kicbase/echo-server:functional-204039 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.566318954s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image rm kicbase/echo-server:functional-204039 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-204039 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.101697413s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-204039
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-204039 image save --daemon kicbase/echo-server:functional-204039 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-204039
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)
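ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full round trip: export the tagged image from the cluster to a tarball, remove it, re-import it from the tarball, and finally copy it back into the host's Docker daemon. A sketch of that round trip using the paths and tags from the logs above:

SAVE=/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
out/minikube-linux-amd64 -p functional-204039 image save kicbase/echo-server:functional-204039 "$SAVE" --alsologtostderr
out/minikube-linux-amd64 -p functional-204039 image rm kicbase/echo-server:functional-204039 --alsologtostderr
out/minikube-linux-amd64 -p functional-204039 image load "$SAVE" --alsologtostderr
# Copy the image back into the host's Docker daemon and inspect it there
out/minikube-linux-amd64 -p functional-204039 image save --daemon kicbase/echo-server:functional-204039 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-204039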

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-204039
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-204039
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-204039
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (207.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-617764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0913 18:41:50.462461   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.601029   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:34.304563   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-617764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.021911092s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.70s)
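StartCluster brings up the multi-control-plane cluster in a single start invocation; --ha requests additional control-plane nodes (the later subtests reference control planes m02 and m03 plus worker m04), and status then reports every node. The two commands, as run above:

out/minikube-linux-amd64 start -p ha-617764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr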

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-617764 -- rollout status deployment/busybox: (5.51207324s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-c28t9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-srmxt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-t4fwq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-c28t9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-srmxt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-t4fwq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-c28t9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-srmxt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-t4fwq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.62s)
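DeployApp verifies in-cluster DNS from every busybox replica: the manifest is applied, the rollout is awaited, pod names are collected with a jsonpath query, and each pod resolves kubernetes.io, kubernetes.default and the fully qualified service name. A condensed sketch of the same check (only the fully qualified lookup is shown), built from the commands logged above:

out/minikube-linux-amd64 kubectl -p ha-617764 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-617764 -- rollout status deployment/busybox
for pod in $(out/minikube-linux-amd64 kubectl -p ha-617764 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done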

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-c28t9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-c28t9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-srmxt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-srmxt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-t4fwq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-617764 -- exec busybox-7dff88458-t4fwq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)
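PingHostFromPods checks host reachability from inside the pods: it resolves host.minikube.internal (taking the address from the fifth line of nslookup output) and then pings the host-side gateway once. The per-pod probe, using one of the pod names logged above:

POD=busybox-7dff88458-c28t9   # one of the pod names from the log
out/minikube-linux-amd64 kubectl -p ha-617764 -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-617764 -- exec "$POD" -- sh -c "ping -c 1 192.168.39.1"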

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-617764 -v=7 --alsologtostderr
E0913 18:45:57.576375   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.583059   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.594491   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.615864   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.657321   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.738786   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:57.900996   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:58.222470   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:58.864568   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:46:00.146825   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:46:02.708888   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:46:07.830636   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:46:18.072554   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-617764 -v=7 --alsologtostderr: (56.826269509s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.66s)
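AddWorkerNode grows the running cluster by one worker and re-checks status. The interleaved cert_rotation errors reference the client certificate of the functional-204039 profile, which was deleted earlier in the run, and appear to be harmless here since the test passes. The two commands, as run above:

out/minikube-linux-amd64 node add -p ha-617764 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr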

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-617764 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp testdata/cp-test.txt ha-617764:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764:/home/docker/cp-test.txt ha-617764-m02:/home/docker/cp-test_ha-617764_ha-617764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test_ha-617764_ha-617764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764:/home/docker/cp-test.txt ha-617764-m03:/home/docker/cp-test_ha-617764_ha-617764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test_ha-617764_ha-617764-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764:/home/docker/cp-test.txt ha-617764-m04:/home/docker/cp-test_ha-617764_ha-617764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test_ha-617764_ha-617764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp testdata/cp-test.txt ha-617764-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m02:/home/docker/cp-test.txt ha-617764:/home/docker/cp-test_ha-617764-m02_ha-617764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test_ha-617764-m02_ha-617764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m02:/home/docker/cp-test.txt ha-617764-m03:/home/docker/cp-test_ha-617764-m02_ha-617764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test_ha-617764-m02_ha-617764-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m02:/home/docker/cp-test.txt ha-617764-m04:/home/docker/cp-test_ha-617764-m02_ha-617764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test_ha-617764-m02_ha-617764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp testdata/cp-test.txt ha-617764-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt ha-617764:/home/docker/cp-test_ha-617764-m03_ha-617764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test_ha-617764-m03_ha-617764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt ha-617764-m02:/home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test_ha-617764-m03_ha-617764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m03:/home/docker/cp-test.txt ha-617764-m04:/home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test_ha-617764-m03_ha-617764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp testdata/cp-test.txt ha-617764-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile62644144/001/cp-test_ha-617764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt ha-617764:/home/docker/cp-test_ha-617764-m04_ha-617764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test_ha-617764-m04_ha-617764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt ha-617764-m02:/home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test_ha-617764-m04_ha-617764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 cp ha-617764-m04:/home/docker/cp-test.txt ha-617764-m03:/home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m03 "sudo cat /home/docker/cp-test_ha-617764-m04_ha-617764-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.43s)
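CopyFile pushes testdata/cp-test.txt to every node and in every direction (host to node, node to host, node to node), verifying each transfer by cat-ing the destination over ssh. The core pattern, shown here for one host-to-node and one node-to-node hop with this cluster's node names:

out/minikube-linux-amd64 -p ha-617764 cp testdata/cp-test.txt ha-617764:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p ha-617764 cp ha-617764:/home/docker/cp-test.txt ha-617764-m02:/home/docker/cp-test_ha-617764_ha-617764-m02.txt
out/minikube-linux-amd64 -p ha-617764 ssh -n ha-617764-m02 "sudo cat /home/docker/cp-test_ha-617764_ha-617764-m02.txt"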

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.460759989s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-617764 node delete m03 -v=7 --alsologtostderr: (15.880848897s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.61s)
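DeleteSecondaryNode removes the m03 control plane and then checks that the surviving nodes are still Ready, both via minikube status and directly against the API server. A sketch of that sequence, with the readiness go-template from above written as a single shell argument:

out/minikube-linux-amd64 -p ha-617764 node delete m03 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-617764 status -v=7 --alsologtostderr
kubectl get nodes
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'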

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestJSONOutput/start/Command (88.75s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-697744 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0913 19:12:09.668647   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-697744 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.747864854s)
--- PASS: TestJSONOutput/start/Command (88.75s)
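With --output=json, minikube emits machine-readable events instead of its normal progress text; the Audit and parallel step subtests that follow only re-examine the captured stream, which is presumably why they report 0.00s. A sketch of capturing and summarizing that stream; treating the output as line-delimited JSON objects with a top-level "type" field is an assumption about the event schema, not something shown in this log:

out/minikube-linux-amd64 start -p json-output-697744 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio | tee events.jsonl
# Assumed schema: one CloudEvents-style object per line with a "type" field
jq -r '.type' events.jsonl | sort | uniq -c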

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-697744 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-697744 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-697744 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-697744 --output=json --user=testUser: (7.379679197s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-159616 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-159616 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.35872ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a030927-4acd-4592-9e6c-ede12774ff38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-159616] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cb130b9-e82a-47b4-9a0c-3739a3ac7f8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"0f2261e6-5ce8-4972-8746-3074097efde5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82bcad64-f38e-4e14-a0f7-6191cd6e4cf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig"}}
	{"specversion":"1.0","id":"99444f6a-defd-4d38-9d65-845bb9cf21af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube"}}
	{"specversion":"1.0","id":"88c0229d-f373-47ec-9527-7ea7cf40e20a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"06b50ff4-34e7-47a0-bff4-8bf603f5a7b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f33fa22-78e8-491d-984f-ee3778523a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-159616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-159616
--- PASS: TestErrorJSONOutput (0.19s)
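Note: the events above are newline-delimited CloudEvents-style JSON, one object per line, so they can be post-processed with any JSON-aware tool. A minimal sketch, assuming jq is available on the host (jq is not part of the test itself):

	# Re-run the failing start and keep only the error events (hypothetical post-processing)
	out/minikube-linux-amd64 start -p json-output-error-159616 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# per the event above, this prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64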

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (85.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-856374 --driver=kvm2  --container-runtime=crio
E0913 19:14:06.601576   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-856374 --driver=kvm2  --container-runtime=crio: (40.534484804s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-874309 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-874309 --driver=kvm2  --container-runtime=crio: (42.196841247s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-856374
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-874309
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-874309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-874309
helpers_test.go:175: Cleaning up "first-856374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-856374
--- PASS: TestMinikubeProfile (85.33s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.94s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-765180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-765180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.934984281s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-765180 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-765180 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
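Note: the same verification can be reproduced by hand against any profile started with --mount; a minimal sketch using the exact commands the helper wraps:

	# List the host directory exposed inside the guest, then confirm it is served over 9p
	out/minikube-linux-amd64 -p mount-start-1-765180 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-765180 ssh -- mount | grep 9p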

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.76s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-780605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-780605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.754986555s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.76s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.87s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-765180 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-780605
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-780605: (1.280222904s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.88s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-780605
E0913 19:15:57.575993   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-780605: (22.884440419s)
--- PASS: TestMountStart/serial/RestartStopped (23.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-832180 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-832180 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.588392882s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.00s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.28s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-832180 -- rollout status deployment/busybox: (4.825285247s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-5p296 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-mjlx4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-5p296 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-mjlx4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-5p296 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-mjlx4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.28s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-5p296 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-5p296 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-mjlx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-832180 -- exec busybox-7dff88458-mjlx4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
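Note: the check above resolves the host's address from inside a pod and then pings it. A minimal sketch of the same round trip, assuming a kubectl context named after the profile (the pod name busybox-7dff88458-5p296 is specific to this run):

	# Pull the host gateway IP out of the in-pod nslookup output, then ping it once
	HOST_IP=$(kubectl --context multinode-832180 exec busybox-7dff88458-5p296 -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-832180 exec busybox-7dff88458-5p296 -- sh -c "ping -c 1 $HOST_IP"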

                                                
                                    
TestMultiNode/serial/AddNode (53.01s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-832180 -v 3 --alsologtostderr
E0913 19:19:00.645206   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:19:06.601526   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-832180 -v 3 --alsologtostderr: (52.455648789s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.01s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-832180 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.99s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp testdata/cp-test.txt multinode-832180:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180:/home/docker/cp-test.txt multinode-832180-m02:/home/docker/cp-test_multinode-832180_multinode-832180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test_multinode-832180_multinode-832180-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180:/home/docker/cp-test.txt multinode-832180-m03:/home/docker/cp-test_multinode-832180_multinode-832180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test_multinode-832180_multinode-832180-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp testdata/cp-test.txt multinode-832180-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt multinode-832180:/home/docker/cp-test_multinode-832180-m02_multinode-832180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test_multinode-832180-m02_multinode-832180.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m02:/home/docker/cp-test.txt multinode-832180-m03:/home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test_multinode-832180-m02_multinode-832180-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp testdata/cp-test.txt multinode-832180-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2586814433/001/cp-test_multinode-832180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt multinode-832180:/home/docker/cp-test_multinode-832180-m03_multinode-832180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test_multinode-832180-m03_multinode-832180.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180-m03:/home/docker/cp-test.txt multinode-832180-m02:/home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180-m02 "sudo cat /home/docker/cp-test_multinode-832180-m03_multinode-832180-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.99s)
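Note: each hop above is the same two-step pattern: a minikube cp into a <node>:<path> target, then ssh -n <node> to read the file back. A minimal sketch of one host-to-node copy and one node-to-node copy, using paths from this run:

	# Host -> primary node, then verify over ssh
	out/minikube-linux-amd64 -p multinode-832180 cp testdata/cp-test.txt multinode-832180:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-832180 ssh -n multinode-832180 "sudo cat /home/docker/cp-test.txt"
	# Node -> node uses the same <node>:<path> form on both sides
	out/minikube-linux-amd64 -p multinode-832180 cp multinode-832180:/home/docker/cp-test.txt multinode-832180-m02:/home/docker/cp-test_multinode-832180_multinode-832180-m02.txt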

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-832180 node stop m03: (1.459311835s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-832180 status: exit status 7 (421.37477ms)

                                                
                                                
-- stdout --
	multinode-832180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-832180-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-832180-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr: exit status 7 (410.656016ms)

                                                
                                                
-- stdout --
	multinode-832180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-832180-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-832180-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:19:20.732401   41460 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:19:20.732642   41460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:19:20.732651   41460 out.go:358] Setting ErrFile to fd 2...
	I0913 19:19:20.732655   41460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:19:20.732817   41460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:19:20.732967   41460 out.go:352] Setting JSON to false
	I0913 19:19:20.732998   41460 mustload.go:65] Loading cluster: multinode-832180
	I0913 19:19:20.733127   41460 notify.go:220] Checking for updates...
	I0913 19:19:20.733554   41460 config.go:182] Loaded profile config "multinode-832180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:19:20.733573   41460 status.go:255] checking status of multinode-832180 ...
	I0913 19:19:20.734032   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.734092   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.751985   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I0913 19:19:20.752556   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.753158   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.753183   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.753474   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.753632   41460 main.go:141] libmachine: (multinode-832180) Calling .GetState
	I0913 19:19:20.755056   41460 status.go:330] multinode-832180 host status = "Running" (err=<nil>)
	I0913 19:19:20.755073   41460 host.go:66] Checking if "multinode-832180" exists ...
	I0913 19:19:20.755345   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.755378   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.770501   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0913 19:19:20.770934   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.771382   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.771406   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.771753   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.771928   41460 main.go:141] libmachine: (multinode-832180) Calling .GetIP
	I0913 19:19:20.774410   41460 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:19:20.774857   41460 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:19:20.774894   41460 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:19:20.775042   41460 host.go:66] Checking if "multinode-832180" exists ...
	I0913 19:19:20.775448   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.775511   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.790278   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0913 19:19:20.790707   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.791169   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.791187   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.791465   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.791635   41460 main.go:141] libmachine: (multinode-832180) Calling .DriverName
	I0913 19:19:20.791804   41460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:19:20.791826   41460 main.go:141] libmachine: (multinode-832180) Calling .GetSSHHostname
	I0913 19:19:20.794388   41460 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:19:20.794884   41460 main.go:141] libmachine: (multinode-832180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:be:cf", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:16:37 +0000 UTC Type:0 Mac:52:54:00:ca:be:cf Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-832180 Clientid:01:52:54:00:ca:be:cf}
	I0913 19:19:20.794925   41460 main.go:141] libmachine: (multinode-832180) DBG | domain multinode-832180 has defined IP address 192.168.39.107 and MAC address 52:54:00:ca:be:cf in network mk-multinode-832180
	I0913 19:19:20.795041   41460 main.go:141] libmachine: (multinode-832180) Calling .GetSSHPort
	I0913 19:19:20.795209   41460 main.go:141] libmachine: (multinode-832180) Calling .GetSSHKeyPath
	I0913 19:19:20.795328   41460 main.go:141] libmachine: (multinode-832180) Calling .GetSSHUsername
	I0913 19:19:20.795434   41460 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180/id_rsa Username:docker}
	I0913 19:19:20.873368   41460 ssh_runner.go:195] Run: systemctl --version
	I0913 19:19:20.879993   41460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:19:20.893849   41460 kubeconfig.go:125] found "multinode-832180" server: "https://192.168.39.107:8443"
	I0913 19:19:20.893876   41460 api_server.go:166] Checking apiserver status ...
	I0913 19:19:20.893905   41460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:19:20.906959   41460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup
	W0913 19:19:20.915769   41460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:19:20.915816   41460 ssh_runner.go:195] Run: ls
	I0913 19:19:20.920320   41460 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0913 19:19:20.924356   41460 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0913 19:19:20.924380   41460 status.go:422] multinode-832180 apiserver status = Running (err=<nil>)
	I0913 19:19:20.924392   41460 status.go:257] multinode-832180 status: &{Name:multinode-832180 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:19:20.924425   41460 status.go:255] checking status of multinode-832180-m02 ...
	I0913 19:19:20.924714   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.924754   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.939601   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0913 19:19:20.939948   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.940429   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.940452   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.940755   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.940918   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetState
	I0913 19:19:20.942488   41460 status.go:330] multinode-832180-m02 host status = "Running" (err=<nil>)
	I0913 19:19:20.942502   41460 host.go:66] Checking if "multinode-832180-m02" exists ...
	I0913 19:19:20.942788   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.942823   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.957492   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0913 19:19:20.957965   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.958454   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.958474   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.958771   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.958959   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetIP
	I0913 19:19:20.961511   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | domain multinode-832180-m02 has defined MAC address 52:54:00:d9:b4:36 in network mk-multinode-832180
	I0913 19:19:20.961868   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b4:36", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:17:34 +0000 UTC Type:0 Mac:52:54:00:d9:b4:36 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-832180-m02 Clientid:01:52:54:00:d9:b4:36}
	I0913 19:19:20.961894   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | domain multinode-832180-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:d9:b4:36 in network mk-multinode-832180
	I0913 19:19:20.962015   41460 host.go:66] Checking if "multinode-832180-m02" exists ...
	I0913 19:19:20.962434   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:20.962478   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:20.977350   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0913 19:19:20.977782   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:20.978268   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:20.978287   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:20.978561   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:20.978733   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .DriverName
	I0913 19:19:20.978916   41460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:19:20.978934   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetSSHHostname
	I0913 19:19:20.981511   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | domain multinode-832180-m02 has defined MAC address 52:54:00:d9:b4:36 in network mk-multinode-832180
	I0913 19:19:20.981915   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b4:36", ip: ""} in network mk-multinode-832180: {Iface:virbr1 ExpiryTime:2024-09-13 20:17:34 +0000 UTC Type:0 Mac:52:54:00:d9:b4:36 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-832180-m02 Clientid:01:52:54:00:d9:b4:36}
	I0913 19:19:20.981943   41460 main.go:141] libmachine: (multinode-832180-m02) DBG | domain multinode-832180-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:d9:b4:36 in network mk-multinode-832180
	I0913 19:19:20.982052   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetSSHPort
	I0913 19:19:20.982208   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetSSHKeyPath
	I0913 19:19:20.982337   41460 main.go:141] libmachine: (multinode-832180-m02) Calling .GetSSHUsername
	I0913 19:19:20.982472   41460 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3902/.minikube/machines/multinode-832180-m02/id_rsa Username:docker}
	I0913 19:19:21.065903   41460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:19:21.081705   41460 status.go:257] multinode-832180-m02 status: &{Name:multinode-832180-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:19:21.081754   41460 status.go:255] checking status of multinode-832180-m03 ...
	I0913 19:19:21.082137   41460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 19:19:21.082208   41460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:19:21.097929   41460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0913 19:19:21.098351   41460 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:19:21.098772   41460 main.go:141] libmachine: Using API Version  1
	I0913 19:19:21.098792   41460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:19:21.099071   41460 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:19:21.099259   41460 main.go:141] libmachine: (multinode-832180-m03) Calling .GetState
	I0913 19:19:21.100965   41460 status.go:330] multinode-832180-m03 host status = "Stopped" (err=<nil>)
	I0913 19:19:21.100980   41460 status.go:343] host is not running, skipping remaining checks
	I0913 19:19:21.100986   41460 status.go:257] multinode-832180-m03 status: &{Name:multinode-832180-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
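Note: the status check in the stderr above ends by probing the apiserver's healthz endpoint directly. A minimal sketch of the same probe from the host, assuming the cluster's default anonymous access to /healthz (the IP is specific to this run; -k skips TLS verification):

	# Hit the endpoint minikube status checked; a healthy apiserver answers "ok"
	curl -k https://192.168.39.107:8443/healthz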

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.18s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-832180 node start m03 -v=7 --alsologtostderr: (39.558312579s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.2s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-832180 node delete m03: (1.694790486s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.20s)
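Note: the readiness check above relies on a kubectl go-template that walks each node's conditions and prints the Ready status. A minimal sketch of the same query with simpler shell quoting, assuming a kubectl context named after the profile:

	# One line per remaining node; expect "True" for each after the delete
	kubectl --context multinode-832180 get nodes \
	  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'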

                                                
                                    
TestMultiNode/serial/RestartMultiNode (202.98s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-832180 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0913 19:28:49.670814   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:29:06.601143   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:57.575595   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-832180 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.480736775s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-832180 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (202.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.92s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-832180
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-832180-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-832180-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.755178ms)

                                                
                                                
-- stdout --
	* [multinode-832180-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-832180-m02' is duplicated with machine name 'multinode-832180-m02' in profile 'multinode-832180'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-832180-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-832180-m03 --driver=kvm2  --container-runtime=crio: (43.8106321s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-832180
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-832180: exit status 80 (218.083138ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-832180 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-832180-m03 already exists in multinode-832180-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-832180-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.92s)

                                                
                                    
TestScheduledStopUnix (113.47s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-903494 --memory=2048 --driver=kvm2  --container-runtime=crio
E0913 19:35:57.576090   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-903494 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.907894081s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-903494 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-903494 -n scheduled-stop-903494
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-903494 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-903494 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-903494 -n scheduled-stop-903494
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-903494
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-903494 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-903494
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-903494: exit status 7 (63.877564ms)

                                                
                                                
-- stdout --
	scheduled-stop-903494
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-903494 -n scheduled-stop-903494
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-903494 -n scheduled-stop-903494: exit status 7 (63.531575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-903494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-903494
--- PASS: TestScheduledStopUnix (113.47s)
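Note: the schedule/cancel cycle above maps onto three minikube commands. A minimal sketch of the same flow, using the flags exercised by the test:

	# Schedule a stop five minutes out, cancel it, then confirm the host is still running
	out/minikube-linux-amd64 stop -p scheduled-stop-903494 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-903494 --cancel-scheduled
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-903494 -n scheduled-stop-903494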

                                                
                                    
TestRunningBinaryUpgrade (236.06s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3770856052 start -p running-upgrade-605510 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3770856052 start -p running-upgrade-605510 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m16.86193765s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-605510 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-605510 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.313496367s)
helpers_test.go:175: Cleaning up "running-upgrade-605510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-605510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-605510: (1.138221756s)
--- PASS: TestRunningBinaryUpgrade (236.06s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.593761ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-590674] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
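For reference, the flag conflict asserted above can be reproduced on any scratch profile; the profile name below is illustrative, everything else is taken from the run:

	# rejected with MK_USAGE (exit status 14): --kubernetes-version cannot be combined with --no-kubernetes
	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# clear any globally configured version, then start without Kubernetes
	$ minikube config unset kubernetes-version
	$ minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio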

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (92.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-590674 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-590674 --driver=kvm2  --container-runtime=crio: (1m32.382631711s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-590674 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-604714 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-604714 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (94.987284ms)

                                                
                                                
-- stdout --
	* [false-604714] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 19:38:28.511585   49800 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:38:28.511723   49800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:38:28.511733   49800 out.go:358] Setting ErrFile to fd 2...
	I0913 19:38:28.511739   49800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:38:28.511907   49800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3902/.minikube/bin
	I0913 19:38:28.512464   49800 out.go:352] Setting JSON to false
	I0913 19:38:28.513319   49800 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4851,"bootTime":1726251457,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 19:38:28.513414   49800 start.go:139] virtualization: kvm guest
	I0913 19:38:28.515878   49800 out.go:177] * [false-604714] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 19:38:28.517136   49800 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 19:38:28.517133   49800 notify.go:220] Checking for updates...
	I0913 19:38:28.518458   49800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 19:38:28.519664   49800 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3902/kubeconfig
	I0913 19:38:28.520993   49800 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3902/.minikube
	I0913 19:38:28.522173   49800 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 19:38:28.523576   49800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 19:38:28.525232   49800 config.go:182] Loaded profile config "NoKubernetes-590674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:38:28.525347   49800 config.go:182] Loaded profile config "offline-crio-568412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 19:38:28.525429   49800 config.go:182] Loaded profile config "running-upgrade-605510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0913 19:38:28.525514   49800 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 19:38:28.560349   49800 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 19:38:28.561585   49800 start.go:297] selected driver: kvm2
	I0913 19:38:28.561596   49800 start.go:901] validating driver "kvm2" against <nil>
	I0913 19:38:28.561613   49800 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 19:38:28.563523   49800 out.go:201] 
	W0913 19:38:28.564698   49800 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0913 19:38:28.565996   49800 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-604714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-604714" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-604714

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-604714"

                                                
                                                
----------------------- debugLogs end: false-604714 [took: 2.666617768s] --------------------------------
helpers_test.go:175: Cleaning up "false-604714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-604714
--- PASS: TestNetworkPlugins/group/false (2.91s)
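The MK_USAGE exit above is the behaviour under test: crio has no built-in pod networking, so minikube refuses --cni=false with this runtime. A minimal sketch of the distinction, with an illustrative profile name:

	# rejected: the "crio" container runtime requires CNI
	$ minikube start -p demo --cni=false --driver=kvm2 --container-runtime=crio
	# any concrete CNI is accepted, e.g. the bridge plugin exercised later in this report
	$ minikube start -p demo --cni=bridge --driver=kvm2 --container-runtime=crio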

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (44.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.435282408s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-590674 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-590674 status -o json: exit status 2 (241.679947ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-590674","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-590674
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-590674: (1.213083533s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (44.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-590674 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.725662874s)
--- PASS: TestNoKubernetes/serial/Start (44.73s)

                                                
                                    
x
+
TestPause/serial/Start (58.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-933457 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-933457 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (58.285318604s)
--- PASS: TestPause/serial/Start (58.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-590674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-590674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.347521ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
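The non-zero exit above is the assertion succeeding: `systemctl is-active` typically returns 3 for an inactive unit, and `minikube ssh` propagates the failure. A manual spot check along the same lines, using the profile from this run:

	# a non-zero exit here means the kubelet unit is not active, which is what --no-kubernetes should produce
	$ minikube ssh -p NoKubernetes-590674 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"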

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.040250828s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-590674
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-590674: (1.517811586s)
--- PASS: TestNoKubernetes/serial/Stop (1.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (43.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-590674 --driver=kvm2  --container-runtime=crio
E0913 19:40:57.575989   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-590674 --driver=kvm2  --container-runtime=crio: (43.566085581s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-590674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-590674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.45737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (106.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3649796446 start -p stopped-upgrade-520539 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3649796446 start -p stopped-upgrade-520539 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (58.204606099s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3649796446 -p stopped-upgrade-520539 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3649796446 -p stopped-upgrade-520539 stop: (2.144419097s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-520539 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-520539 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.168440448s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.52s)
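In outline, the upgrade path exercised above (the versioned binary is a temporary download made by the test, shown here with a placeholder suffix; the profile name is shortened for readability):

	# 1. create a cluster with an older released minikube, then stop it
	$ /tmp/minikube-v1.26.0.<tmp> start -p stopped-upgrade --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.<tmp> -p stopped-upgrade stop
	# 2. restart the same profile with the freshly built binary; the profile must come back up after the upgrade
	$ out/minikube-linux-amd64 start -p stopped-upgrade --memory=2200 --driver=kvm2 --container-runtime=crio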

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (95.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m35.904233886s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (89.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0913 19:44:06.601419   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m29.983158583s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.98s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-520539
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-520539: (1.040004381s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (101.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m41.568359301s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-97g6t" [8534b118-6a9c-4493-a287-ab99ef8cefcd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-97g6t" [8534b118-6a9c-4493-a287-ab99ef8cefcd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004622733s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
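The DNS, Localhost and HairPin checks above all execute from inside the netcat deployment; condensed, the three probes are (context name from this run):

	# DNS: resolve the kubernetes.default service through the cluster DNS
	$ kubectl --context auto-604714 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach its own listening port
	$ kubectl --context auto-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself back through its own "netcat" service
	$ kubectl --context auto-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"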

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bnfzv" [34e5c3e1-fd32-4d91-86cd-c7910fd9a53e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005481676s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
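The helper above polls for a Running, healthy pod carrying the app=kindnet label; a roughly equivalent manual check with plain kubectl (the wait flags are standard kubectl, not taken from the test code):

	# block until the kindnet DaemonSet pod in kube-system reports Ready
	$ kubectl --context kindnet-604714 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m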

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vxlm7" [eadcef92-f6b9-4957-964e-b4da866af1fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vxlm7" [eadcef92-f6b9-4957-964e-b4da866af1fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004951366s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0913 19:45:29.672940   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.075947617s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.08s)
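Unlike the built-in flannel variant exercised later in this report, this group supplies the CNI by manifest path; --cni accepts either a known plugin name or a path to a CNI manifest (the path below is the test's own testdata file):

	# built-in names (bridge, calico, flannel, kindnet, ...) and manifest paths are both valid values for --cni
	$ minikube start -p custom-flannel-604714 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio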

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m32.755532552s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tjzcf" [9af05651-31c3-40a6-948f-89622516abb2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004998821s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-604714 "pgrep -a kubelet"
E0913 19:45:57.579558   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/functional-204039/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4nvp6" [01a6acd2-ccbb-4cb4-a77a-7ac818a54c02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4nvp6" [01a6acd2-ccbb-4cb4-a77a-7ac818a54c02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003869551s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (79.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m19.218981944s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h2tsv" [9ed9c7ab-d25c-4dd5-b40c-43f7d89d5fce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h2tsv" [9ed9c7ab-d25c-4dd5-b40c-43f7d89d5fce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004430008s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-604714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.174888785s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bxqbw" [48b4e8df-2d66-4ca6-9251-a343b6975fb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bxqbw" [48b4e8df-2d66-4ca6-9251-a343b6975fb0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004597805s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
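Note on the HairPin checks in this group: the netcat pod dials its own Service name ("netcat:8080"), so a pass means hairpin NAT works for that CNI. A minimal Go sketch of the same probe, driven through kubectl the way net_test.go does; the context and deployment names are copied from the log, the helper itself is hypothetical and not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

// hairpinOK runs nc inside the netcat deployment, dialing its own Service;
// success means traffic leaving the pod is routed back to the same pod.
func hairpinOK(kubectlContext string) error {
	cmd := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("hairpin probe failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := hairpinOK("enable-default-cni-604714"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("hairpin connectivity OK")
}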

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-twcxf" [17af23ba-f1f6-4636-aed5-bb235a1a3ab1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004566725s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ftjsj" [2cfe11e6-6e80-4941-a755-79b948421e81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ftjsj" [2cfe11e6-6e80-4941-a755-79b948421e81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003880288s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (102.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-239327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-239327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m42.085229084s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-604714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-604714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nxwhl" [1d44d441-4ee4-4524-bac5-8a571ac76732] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nxwhl" [1d44d441-4ee4-4524-bac5-8a571ac76732] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005226888s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (57.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-175374 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-175374 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (57.242702032s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-604714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-604714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-512125 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-512125 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m24.23760379s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-175374 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [78550100-7601-4019-a699-49a888b727ec] Pending
helpers_test.go:344: "busybox" [78550100-7601-4019-a699-49a888b727ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [78550100-7601-4019-a699-49a888b727ec] Running
E0913 19:50:00.700820   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:00.707200   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:00.718572   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:00.740000   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:00.781394   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:00.862857   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:01.024390   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:01.345814   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.005080347s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-175374 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.32s)
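The DeployApp steps above wait for the busybox pod and then read "ulimit -n" inside it, i.e. they verify the container's open-file limit is queryable. A minimal sketch of that readback, assuming only kubectl and the context name from the log; this is an illustrative helper, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// openFileLimit execs `ulimit -n` inside the busybox pod and parses the number.
func openFileLimit(kubectlContext string) (int, error) {
	cmd := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	out, err := cmd.Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, err := openFileLimit("embed-certs-175374")
	if err != nil {
		fmt.Println("ulimit readback failed:", err)
		return
	}
	fmt.Println("open file limit in busybox:", n)
}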

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-175374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0913 19:50:01.987345   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-175374 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-239327 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbf45dbd-00fc-4d1f-952b-e3741f1e2e96] Pending
helpers_test.go:344: "busybox" [bbf45dbd-00fc-4d1f-952b-e3741f1e2e96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0913 19:50:10.951671   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/auto-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.299616   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.305959   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.317336   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.338719   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.380148   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.461594   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [bbf45dbd-00fc-4d1f-952b-e3741f1e2e96] Running
E0913 19:50:12.623741   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:12.945520   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:13.587624   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:14.869549   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:17.431465   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003273792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-239327 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-239327 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-239327 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [20c1686a-3596-4bef-8989-c6baee6beb67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [20c1686a-3596-4bef-8989-c6baee6beb67] Running
E0913 19:50:51.373728   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.380169   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.391549   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.413034   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.454391   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.535879   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:51.697364   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:52.019205   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00506111s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-512125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0913 19:50:52.661196   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:50:53.276864   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/kindnet-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-512125 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (635.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-175374 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-175374 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m35.131173645s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175374 -n embed-certs-175374
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (635.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (568.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-239327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0913 19:52:51.437530   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-239327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m28.11387206s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-239327 -n no-preload-239327
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (568.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (584.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-512125 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0913 19:53:29.845054   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/flannel-604714/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:53:35.232751   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/calico-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-512125 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m44.33592182s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512125 -n default-k8s-diff-port-512125
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (584.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-234290 --alsologtostderr -v=3
E0913 19:53:41.526667   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/enable-default-cni-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-234290 --alsologtostderr -v=3: (1.354148232s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234290 -n old-k8s-version-234290: exit status 7 (64.337774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-234290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
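The "exit status 7 (may be ok)" note above reflects that `minikube status` exits non-zero for a stopped host, and the harness tolerates that before enabling an addon. A minimal sketch of the same tolerance, assuming exit code 7 simply means "host stopped" as the log shows; hypothetical helper, not the harness code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status` for a profile and treats exit code 7
// (stopped host) as a valid answer rather than an error.
func hostStatus(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		return strings.TrimSpace(string(out)), nil // "Stopped" is expected here
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostStatus("old-k8s-version-234290")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("host state:", state)
}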

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-350416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-350416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (48.892030885s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-350416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-350416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119846675s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)
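The warning above means that with --network-plugin=cni and no CNI manifest applied, non-host-network pods stay Pending, which is why the newest-cni group skips its app checks. A minimal sketch of how a CNI could be applied and the node waited on afterwards; the manifest path is a placeholder assumption, not something this run used.

package main

import (
	"fmt"
	"os/exec"
)

// applyCNIAndWait applies a CNI manifest, then polls until all nodes are Ready.
// cniManifest is a hypothetical local path; substitute your CNI of choice.
func applyCNIAndWait(kubectlContext, cniManifest string) error {
	apply := exec.Command("kubectl", "--context", kubectlContext, "apply", "-f", cniManifest)
	if out, err := apply.CombinedOutput(); err != nil {
		return fmt.Errorf("apply CNI: %v\n%s", err, out)
	}
	wait := exec.Command("kubectl", "--context", kubectlContext,
		"wait", "--for=condition=Ready", "nodes", "--all", "--timeout=120s")
	if out, err := wait.CombinedOutput(); err != nil {
		return fmt.Errorf("node not Ready: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyCNIAndWait("newest-cni-350416", "./kube-flannel.yml"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CNI applied; node Ready")
}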

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-350416 --alsologtostderr -v=3
E0913 20:18:49.676712   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
E0913 20:18:50.604163   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/bridge-604714/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-350416 --alsologtostderr -v=3: (10.486844179s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-350416 -n newest-cni-350416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-350416 -n newest-cni-350416: exit status 7 (66.397076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-350416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-350416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0913 20:19:06.600889   11079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3902/.minikube/profiles/addons-979357/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-350416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (35.199337514s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-350416 -n newest-cni-350416
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-350416 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
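VerifyKubernetesImages lists the profile's images and flags anything outside the stock registries (here kindest/kindnetd). A minimal sketch of that filter, assuming the default `image list` output is one image reference per line (the JSON schema used by --format=json is not relied on here); illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonMinikubeImages returns images in the profile whose registry is not one
// of the registries minikube's own components are pulled from.
func nonMinikubeImages(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		return nil, err
	}
	var extra []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		img := strings.TrimSpace(line)
		if img == "" {
			continue
		}
		if !strings.HasPrefix(img, "registry.k8s.io/") &&
			!strings.HasPrefix(img, "gcr.io/k8s-minikube/") {
			extra = append(extra, img) // e.g. kindest/kindnetd in this run
		}
	}
	return extra, nil
}

func main() {
	imgs, err := nonMinikubeImages("newest-cni-350416")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	for _, img := range imgs {
		fmt.Println("found non-minikube image:", img)
	}
}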

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-350416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-350416 --alsologtostderr -v=1: (1.700059355s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-350416 -n newest-cni-350416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-350416 -n newest-cni-350416: exit status 2 (308.31166ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-350416 -n newest-cni-350416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-350416 -n newest-cni-350416: exit status 2 (299.585398ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-350416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-350416 -n newest-cni-350416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-350416 -n newest-cni-350416
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.21s)
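The Pause test above drives pause, checks component status, then unpauses and checks again; "exit status 2 (may be ok)" is how `minikube status` reports a paused or stopped component in this log. A minimal sketch of that round trip, assuming exit code 2 is the expected answer while paused; hypothetical helper, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus renders a status field, tolerating exit code 2, which the
// log shows minikube returning for paused or stopped components.
func componentStatus(profile, format string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
		return strings.TrimSpace(string(out)), nil
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "newest-cni-350416"
	for _, step := range []string{"pause", "unpause"} {
		if out, err := exec.Command("out/minikube-linux-amd64", step,
			"-p", profile, "--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
			fmt.Printf("%s failed: %v\n%s\n", step, err, out)
			return
		}
		api, _ := componentStatus(profile, "{{.APIServer}}")
		kubelet, _ := componentStatus(profile, "{{.Kubelet}}")
		fmt.Printf("after %s: apiserver=%s kubelet=%s\n", step, api, kubelet)
	}
}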

                                                
                                    

Test skip (37/310)

Order skipped test Duration

5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
174 TestImageBuild 0
201 TestKicCustomNetwork 0
202 TestKicExistingNetwork 0
203 TestKicCustomSubnet 0
204 TestKicStaticIP 0
236 TestChangeNoneUser 0
239 TestScheduledStopWindows 0
241 TestSkaffold 0
243 TestInsufficientStorage 0
247 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 2.57
261 TestNetworkPlugins/group/cilium 4.21
267 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
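These tunnel tests (and the ones below) are skipped because modifying host routes needs passwordless sudo. A minimal sketch of the kind of preflight probe the skip message implies, assuming `sudo -n route` is the mechanism; the exact check in functional_test_tunnel_test.go may differ.

package main

import (
	"fmt"
	"os/exec"
)

// canRouteWithoutPassword reports whether `route` can be run via sudo
// non-interactively; if not, the tunnel tests have to be skipped.
func canRouteWithoutPassword() bool {
	// -n makes sudo fail instead of prompting for a password.
	return exec.Command("sudo", "-n", "route").Run() == nil
}

func main() {
	if !canRouteWithoutPassword() {
		fmt.Println("password required to execute 'route', skipping tunnel tests")
		return
	}
	fmt.Println("passwordless route access available")
}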

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-604714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-604714" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-604714

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-604714"

                                                
                                                
----------------------- debugLogs end: kubenet-604714 [took: 2.443862436s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-604714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-604714
--- SKIP: TestNetworkPlugins/group/kubenet (2.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-604714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-604714

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-604714

>>> host: /etc/nsswitch.conf:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/hosts:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/resolv.conf:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-604714

>>> host: crictl pods:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: crictl containers:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> k8s: describe netcat deployment:
error: context "cilium-604714" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-604714" does not exist

>>> k8s: netcat logs:
error: context "cilium-604714" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-604714" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-604714" does not exist

>>> k8s: coredns logs:
error: context "cilium-604714" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-604714" does not exist

>>> k8s: api server logs:
error: context "cilium-604714" does not exist

>>> host: /etc/cni:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: ip a s:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: ip r s:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: iptables-save:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: iptables table nat:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-604714

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-604714

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-604714" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-604714" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-604714

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-604714

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-604714" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-604714" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-604714" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-604714" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-604714" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: kubelet daemon config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> k8s: kubelet logs:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-604714

>>> host: docker daemon status:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: docker daemon config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: docker system info:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: cri-docker daemon status:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: cri-docker daemon config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: cri-dockerd version:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: containerd daemon status:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: containerd daemon config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: containerd config dump:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: crio daemon status:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: crio daemon config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: /etc/crio:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

>>> host: crio config:
* Profile "cilium-604714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604714"

----------------------- debugLogs end: cilium-604714 [took: 3.74612896s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-604714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-604714
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-221882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-221882
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
